May 30
Last modified by sbrickey on 5/27/2016 12:18 PM
This article is long overdue, given how often I end up repeating myself.

SharePoint is a web application - however users interact with it, in the end they're almost always interacting with code that is being run as a web application within IIS on the WFEs. (The tiny exception being PowerShell code and scheduled tasks, which may actually run on app servers and interact directly with the database server - this is the 1%.)

But as far as a computer is concerned, the web is a scary, dangerous, and sometimes harmful place to venture. As such, a wide array of technologies help protect you when you open a web page: from the web browser itself, to firewalls and antivirus/antimalware systems, to the operating system itself.

A Primer on Security
The most secure system is the one that is never used. But the reality is, we don't build things to ignore them. We need to be able to go on the web, to access the services that it provides.
Security is therefore implemented on a scale. On one end, the most secure system is completely unusable. On the other, the most usable system is completely insecure. For people in the real world, computers need to know how to behave on that scale.

Security Within the Browser
For almost all people, working with SharePoint begins when the user opens a web browser (preferably Internet Explorer) to browse to their SharePoint site. In most cases, the browser doesn't know a SharePoint site from any other website, so it treats it as hostile. This means that it won't automatically tell the website who you are, or let the website open programs on your computer. In this scenario, your browser is placing its behavior on the secure side of the security scale, rather than the usability side. Only when it's told otherwise, will it change its behavior.

Additionally, to make SharePoint easier to use, people often want SharePoint to open programs on their computer. This is again a potentially VERY dangerous action to take! But, it also lets you synchronize your SharePoint calendar to Outlook for offline access or easier scheduling. Again, a scale between security and usability. Clearly, the browser needs to know which sites can be trusted for this most sacred of tasks.

Security Beyond the Browser
As much as SharePoint is a web application, it's also designed to be accessible/usable from other applications.
This presents two challenges:
- First, does the program trust that it's being launched (and told to open a file) under legitimate circumstances?
- Second, how will the application know which sites are trustworthy for things like automatic logon?

Most times, the application will simply trust the browser. Occasionally though, applications will keep their own set of settings in their own location.

Security Policy 1 : Internet Explorer : Intranet and Trusted Zones
Having already covered this in previous posts (IE Security - Trusted Sites and Intranet Zones), this is mostly an honorable mention.
This setting will basically put the browser into a very USABLE mode, with much lower security settings. Automatic login, launch programs, the works.

For those with ON-PREM SharePoint, using AD INTEGRATED authentication (NOT the same as ADFS), put your SharePoint site URLs in your INTRANET zone.
For those with ON-PREM SharePoint, using ADFS authentication, put your ADFS URLs in your INTRANET zone.
For those with CLOUD SharePoint, NOT using ADFS authentication, put your SharePoint site URLs in your TRUSTED zone.
For those with CLOUD SharePoint, using ADFS authentication, put your ADFS URLs in your INTRANET zone.

- The reason you use ADFS addresses is because that's when the browser actually tells the server which domain account you're logged in as, which is then sent back to SharePoint, behind your back so to speak.
- The reason that CLOUD uses TRUSTED sites (not ADFS) is because there's no way for the servers to match your computer login, so the browser just keeps and re-sends the name and password you provide.
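For centrally managed machines, these zone assignments are normally pushed via Group Policy (the Site to Zone Assignment List). For a quick sketch on a single machine, the same mapping can be written directly to the per-user ZoneMap registry keys; the contoso.com hostname below is purely an example:

```powershell
# Zone mappings live under Internet Settings\ZoneMap\Domains (per-user)
# A DWORD named for the URL scheme holds the zone: 1 = Intranet, 2 = Trusted
$domains = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains"

# Example: map https://sharepoint.contoso.com into the Intranet zone
$key = New-Item -Path "$domains\contoso.com\sharepoint" -Force
New-ItemProperty -Path $key.PSPath -Name "https" -Value 1 -PropertyType DWord -Force | Out-Null
```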

Security Policy 2 : WebClient : Trusted Servers
When the user clicks Open With Explorer, the browser is actually telling Windows to open Explorer to the SharePoint URL. Windows in turn identifies that web-based folders are accessed using the WebDAV protocol, which is provided by the Windows service called WebClient. When the WebClient attempts to open the URL, SharePoint wants to know the logon. The WebClient does NOT share the settings from Internet Explorer, so AD Integrated Authentication requires its own policy.

Reg Key : HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\AuthForwardServerList (MultiString Value)
Value : URL pattern per line
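As a sketch (the hostnames are examples, not a recommendation), the multi-string value might be populated like this in PowerShell:

```powershell
# WebClient reads AuthForwardServerList as a REG_MULTI_SZ, one URL pattern per line
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" `
                 -Name "AuthForwardServerList" -PropertyType MultiString -Force `
                 -Value @( "https://*.contoso.com" , "http://sharepoint" )

# the WebClient service must be restarted to pick up the change
Restart-Service WebClient
```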

Documentation :

Usability Policy : WebClient : Max File Size
Within SharePoint, the default max file size is 250mb. Unfortunately, the WebClient has its own default max size of 50mb. Often worth bumping up.

Reg Key : HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\FileSizeLimitInBytes (DWORD)
Value : 50000000 is the default 50mb
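As a sketch, bumping the WebClient limit to match SharePoint's 250 MB default (the value is in bytes and caps at 0xFFFFFFFF, just under 4 GB; the WebClient service must be restarted to pick up the change):

```powershell
# 250,000,000 bytes to roughly match SharePoint's 250 MB default upload limit
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" `
                 -Name "FileSizeLimitInBytes" -Value 250000000

Restart-Service WebClient
```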

Documentation :

Optional Policy : Office : Open Office document directly in Office application
This policy setting allows you to choose whether Office documents located on web servers open directly in the registered application or through the web browser. 
If you enable this policy setting, files will open directly in the associated Office application, bypassing the web browser.
If you disable this policy setting files will open through the web browser.

Reg Key : HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\15.0\common\internet\opendirectlyinapp
Reg Key : HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\16.0\common\internet\opendirectlyinapp
Reg Key : HKEY_LOCAL_MACHINE\software\policies\microsoft\Office\15.0\common\fileio\opendirectlyinapp
Reg Key : HKEY_LOCAL_MACHINE\software\policies\microsoft\Office\16.0\common\fileio\opendirectlyinapp
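A sketch for enabling this per-user for Office 2016; I'm assuming a DWORD value of 1 means enabled, so verify against the Office ADMX templates before deploying:

```powershell
# Office 2016 (16.0) per-user policy; create the key path if it doesn't exist
$path = "HKCU:\Software\Policies\Microsoft\Office\16.0\common\internet"
New-Item -Path $path -Force | Out-Null

# assumption: DWORD 1 = open directly in the associated Office application
New-ItemProperty -Path $path -Name "opendirectlyinapp" -Value 1 -PropertyType DWord -Force | Out-Null
```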

Branding Policies
Here are a few policies that you can set to help brand SharePoint to the users. These can help indicate to users that SharePoint is a trusted place to put their documents, or use it to indicate when SharePoint should NOT be used to store a file.

Branding Policy 1 : Office : Name your SharePoint site
When opening or saving documents within Office, you can override the default term 'SharePoint'.

Reg Key : HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\15.0\common\sharepointintegration\productname
Reg Key : HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\16.0\common\sharepointintegration\productname

Branding Policy 2 : Office : Name your SharePoint Sites heading
The folder name used to store network folder shortcuts published from SharePoint Server. "My SharePoints" (localized) by default. 

Reg Key : HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\15.0\common\portal\linkpublishingfoldername
Reg Key : HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\16.0\common\portal\linkpublishingfoldername
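As a sketch, both branding strings can be set per-user with PowerShell; the "Contoso" names are placeholders:

```powershell
# Office 2016 (16.0) per-user branding policies; example strings only
$o16 = "HKCU:\Software\Policies\Microsoft\Office\16.0\common"

New-Item -Path "$o16\sharepointintegration" -Force | Out-Null
New-ItemProperty -Path "$o16\sharepointintegration" -Name "productname" `
                 -Value "Contoso Portal" -PropertyType String -Force | Out-Null

New-Item -Path "$o16\portal" -Force | Out-Null
New-ItemProperty -Path "$o16\portal" -Name "linkpublishingfoldername" `
                 -Value "Contoso Sites" -PropertyType String -Force | Out-Null
```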

May 25
Last modified by sbrickey on 5/23/2016 4:04 PM

People like to find reasons not to like SharePoint. I get that, though I often take a moment to listen to the reasons, because most of the time the reasons are bunk, and I can correct them. Not to say that SP doesn't have issues, but they aren't the big issues.

Last week someone was telling me why SharePoint doesn't have a "real" wiki.

These were the cited reasons...

- Doesn't use Markdown or other syntax.
  Granted. MD was built to make it easier for users to write HTML without knowing HTML, letting users type *word* instead of <b>word</b> for bold. And he was correct, SharePoint doesn't use Markdown. Technically, neither does Wikipedia. In fact, Markdown isn't even an official spec (the closest thing to one was defined years after Markdown started, which is its own problem). But back to the point: you don't NEED Markdown. You can highlight the text and simply click the bold button, or press CTRL+B. No Markdown, no HTML, no code whatsoever. Seems like SP is actually BETTER in this case.

- Without Markdown, how do you see what changed?
  With version history. And SP even shows the differences. Just like (brief demo)

- Can't easily insert an image
  Yes, in fact you can. (brief demo)

- No automatic Table of Contents
  OK, but keep in mind that the TOC, and also the list of categories at the end... are *not* built into the core MediaWiki distribution... they're added via extensions and templates.

- No automatically generated breadcrumb trail
  While it's true that MS has been making the SHAREPOINT breadcrumb trail harder and harder to find/use... wikis have NEVER had a breadcrumb trail, since there's no one way to get to a page! I personally have "applied" a breadcrumb trail, by simply putting it in the top of each page... as a general assumption of usability... but it's not enforced in ANY way. To do this, I simply use [[Home]] > [[PageA]] > [[Page1]] > [[ThisPage]]

- Editing is performed on the entire page, not just a section
  This was actually something that I pointed out, since it's true, but mainly to gauge the significance of this. As it turned out, this was not a critical feature (huge surprise).

In the end I'd generally addressed his concerns, though the table of content request was still stuck in his mind. He ended up finding an article (link is below in references) which was supposed to build a TOC... unfortunately, I noticed that the referenced code was broken in several places... but I used it as a starting point.

The goal
Here's a trimmed down screenshot of the TOC from Wikipedia's SharePoint page. (sections 1 through 7 removed for simplicity)

The intent is to automatically generate this using the heading styles predefined in SharePoint, since they're a promoted method of separation.
Also, for several reasons, I didn't want to care about the numbering. I just wanted it to auto-build, tab for sections, and look roughly similar (ease adoption).

The Approach
This part is simple... a brief code snippet that will be saved to a file (in the Style Library), and then injected into the page using a Content Editor web part.
By using this approach, the "TOC" (web part) can quickly be added to a wiki page.
If necessary, use of content approval within the style library can provide testing and validation of future changes to the TOC code.

The code


<style type="text/css">
#toc {
    display: table;
    border: 1px solid #aaa;
    background-color: #f9f9f9;
    font-size: 95%;
    padding: 7px;
}

#toc #tocHeader {
    font-weight: bold;
    text-align: center;
}

#toc a:before { /* content:"• "; */ }
#toc a { line-height: 15px; margin: 10px; }

#toc .toc_Level1 { margin-left: 5px; }
#toc .toc_Level2 { margin-left: 15px; }
#toc .toc_Level3 { margin-left: 25px; }
#toc .toc_Level4 { margin-left: 35px; }
</style>

<div id="toc"></div>

<script src=""></script> <!-- jQuery reference -->
<script type="text/javascript">
function GenerateTOC() {
  $("#toc").append('<p id="tocHeader">Contents</p>');

  $(".ms-rteElement-H1, .ms-rteElement-H1B, .ms-rteElement-H2, .ms-rteElement-H2B, .ms-rteElement-H3, .ms-rteElement-H3B, .ms-rteElement-H4, .ms-rteElement-H4B").each(function(i) {
    var currentNode = $(this);
    currentNode.attr("id", "title" + i);

    var linkClass = (currentNode.hasClass('ms-rteElement-H1') || currentNode.hasClass('ms-rteElement-H1B'))
                  ? "toc_Level1"
                  : (currentNode.hasClass('ms-rteElement-H2') || currentNode.hasClass('ms-rteElement-H2B'))
                  ? "toc_Level2"
                  : (currentNode.hasClass('ms-rteElement-H3') || currentNode.hasClass('ms-rteElement-H3B'))
                  ? "toc_Level3"
                  : (currentNode.hasClass('ms-rteElement-H4') || currentNode.hasClass('ms-rteElement-H4B'))
                  ? "toc_Level4"
                  : "";

    $("#toc").append("<a id='link" + i + "' class='" + linkClass + "' href='#title" + i + "' title='" + currentNode.prop("tagName") + "'>" + currentNode.html() + "</a><br>");

    currentNode.append(" <a href='#tocHeader'>[top]</a>");
  });
}

_spBodyOnLoadFunctionNames.push("GenerateTOC");
</script>

So a quick rundown...
- CSS to recreate some of the wikipedia look and feel
- An empty TOC (div element), which will be dynamically populated.
- Add a reference to jQuery (technically I should add this to the Site Assets library)
- The GenerateTOC function, which is necessary so that it can be called by _spBodyOnLoadFunctionNames, which gives SharePoint a chance to load the rest of the wiki content.
- Within the function:
  - add a "Contents" header (this could easily be hard coded, but whatever)
  - scan the page for the various Header styles (all the B styles are "colored" headers)
  - determine the CSS class for each header depth (each of which has a different left-hand margin)
  - add a link to the section, using the designated CSS class
  - append a "top" link to each of the header sections (returns to TOC)

I saved this to the Style Library as "WikiTOC.html", and added a CEWP linking to it.

Final Touches
After adding the web part, I name it "Table of Contents", but I set the branding to NONE... reason being that this already includes a header, so the Title is unnecessary... but I don't want to remove the title entirely, or it'll be confusing to work with in edit mode (when the title is shown regardless).

I could've also exported the web part, at which point users could simply add a "Wiki TOC Web Part"... maybe it'll happen in time.

The Result
Here's a snippet from a demo page... I include some pre-TOC notes, which could be an eventual introduction section, just to point out that the web part doesn't need to be the top-most element in the wiki page.


May 20
Last modified by sbrickey on 5/18/2016 2:38 PM
The other day, a client wanted their portal to provide a notification to the user at each logon, reminding them to be aware of the documents they are uploading, and the users who can access them. Basically, don't upload sensitive info to a very public library.

I decided that the easiest answer was to use SharePoint's Status Notification Bar. A quick and gentle reminder. They can dismiss the reminder on the page, but never permanently (or even for any period of time).

I threw together a few lines of JavaScript, added a webpart, and was done.

In the Site Assets library, I added a file with the following:

<script type="text/javascript">
function PortalNotification() {
  var statusId = SP.UI.Status.addStatus("Notice", "Remember that you are responsible for the content in this site. It is up to you to consider the content, the audience, and any applicable restrictions. If you have any questions, please feel free to contact IT or Security.  &nbsp; <a id='lnkPortalNotificationClose' href='#'><b>(X)</b></a>", true);
  SP.UI.Status.setStatusPriColor(statusId, 'yellow');

  var lnkPortalNotificationClose = document.getElementById('lnkPortalNotificationClose');
  lnkPortalNotificationClose.onclick = function() { SP.UI.Status.removeStatus(statusId); };
  lnkPortalNotificationClose.href = "#";
}

ExecuteOrDelayUntilScriptLoaded(PortalNotification, "sp.js");
</script>


It's about as simple as can be... add the status... then grab the hyperlink and set the onclick code (since it needs to know the status ID, which is only known after the status has been added). Then tell SP to load SP.JS and run the function.

Then I added a content editor web part, and linked it to the file.
March 28
Last modified by sbrickey on 3/24/2016 9:23 AM

Since I realized that I'm not aware of any pages that provide this, I figured here's as good a place to put it as any.

This matrix is for the compatibility between SharePoint and the SQL servers used to host the SharePoint CONTENT AND SERVICE APPLICATION DATABASES ONLY.

This matrix has nothing to do with the BI integration (RS/AS), nor does it have anything to do with external SQL servers that SharePoint may CONSUME (via BCS, PerformancePoint, etc)

SQL Version              SharePoint Version               Links
SQL 2014 + SP1           SP 2016 RTM                      ref:
SQL 2014                 SP 2013 + SP1 + April 2014 CU    SP April 2014 CU:
SQL 2012                 SP 2013 RTM                      ref:
SQL 2008 R2 + SP1        SP 2013 RTM                      ref:
SQL Express                                               SP1:
SQL 2012                 SP 2010 + SP1                    ref:
SQL 2008 R2              SP 2010 RTM                      ref:
SQL 2008 + SP1 + CU2     SP 2010 RTM                      ref:
                         (SQL CU3/4 NOT recommended, CU5 is fine)
SQL 2005 + SP3           SP 2010 RTM                      ref:

For SQL Reporting Services integration (BI features like PowerPivot, PowerView)


March 14
Last modified by sbrickey on 3/16/2016 1:26 PM
Here are some links...

SharePoint Server 2016 :

Language Packs :

Project Server 2016 : Part of SP2016!

other: TBD...
September 23
Last modified by sbrickey on 9/21/2015 9:23 AM
A while ago I was working with workflows programmatically, and I needed to see the XML data that SharePoint stored for each instance of a workflow (each time a workflow is run for a specific file/item).

For reasons that I don't recall offhand, I wanted to access it directly from the database (NOTE: I was doing this on a NON-PRODUCTION box, using a BACKUP copy of the content database - DO NOT DO THIS IN A PRODUCTION ENVIRONMENT)

Since the XML data is actually stored as a compressed blob in the content database, I also needed to decompress the data.

Here's the script...

Add-Type -AssemblyName System.Windows.Forms

$tConn = New-Object System.Data.SqlClient.SqlConnection("Server=SPSQLSERVER;Database=WSS_Content;Trusted_Connection=True;")
$tConn.Open()

$tCmd = New-Object System.Data.SqlClient.SqlCommand
$tCmd.Connection = $tConn
$tCmd.CommandType = [System.Data.CommandType]::Text
$tCmd.CommandText = @"
SELECT [InstanceData]
  FROM [WSS_Content].[dbo].[Workflow] WITH (NOLOCK)
  WHERE [SiteId] = 'f6a33bae-0bfe-425f-9d9e-15e230a2b574'
    AND [WebId]  = '85758953-1d55-4c30-82d4-475b5f222693'
    AND [ListId] = '184226f9-a872-4e25-a780-9a5b5ed5a518'
    AND [Id]     = '3434074B-F9B8-42FE-92CC-5FB1089D5E0F'
"@

$tReader = $tCmd.ExecuteReader()
while ($tReader.Read())
{
    $InstanceData = $tReader["InstanceData"]
}
$tReader.Close()
$tConn.Close()

# wrap the blob in a MemoryStream, then decompress via GZipStream
$ms       = New-Object System.IO.MemoryStream( , $InstanceData )
$gzStream = New-Object System.IO.Compression.GZipStream( $ms , ( [System.IO.Compression.CompressionMode]::Decompress ) )
$sb       = New-Object System.Text.StringBuilder

$rawData = New-Object byte[] 1024
while ( ( $bytesRead = $gzStream.Read( $rawData , 0 , $rawData.Length ) ) -gt 0 )
{
    [void]$sb.Append( [System.Text.Encoding]::Default.GetString( $rawData , 0 , $bytesRead ) )
}

$s = $sb.ToString()

# copy the decompressed XML to the clipboard (needs System.Windows.Forms, added above)
[System.Windows.Forms.Clipboard]::SetText( $sb.ToString() )

August 31
Last modified by sbrickey on 8/28/2015 11:08 AM
The other day I was investigating a rogue GUID that appeared in SharePoint. The specifics aren't really relevant, except for two facts: First, SharePoint farms consist of several databases (10 minimum for Foundation edition, 18 for Standard edition, and 19 for Enterprise edition). And second, SharePoint loves to use GUIDs; just about every primary key is a GUID.

Anyway. I was seeing a GUID, and I wanted to know where it came from.

Now, for those who aren't familiar with the SharePoint content database schemas (which you shouldn't worry about - fiddling with the databases is generally bad, and leaves you unsupported if anything goes wrong), a single TABLE is used to store ALL of the metadata across ALL of the site collections in the database. To accomplish this, the metadata table is MASSIVE (and is extremely tuned by MS). Since MS and SP have no way of knowing how much data you'll store in your metadata, they simply provide a large capacity of each type.

The last part is important, because I'm looking for a GUID, and the metadata table contains 6 core GUID fields (ID, SiteID, ListID, WorkflowID, ParentID, and DocID) as well as 350 fields for user defined metadata.
That's just ONE table (albeit the most tedious).

So I needed a better way. I needed something automatic (or at least really close).

To the TSQL drawing board, aka SQL Management Studio!

My plan was simple: have SQL build me the query.
First I needed to know about the columns. A quick search revealed the INFORMATION_SCHEMA.COLUMNS system view, which includes the names of the database, schema, table, and column, as well as the column's datatype and ordinal position.

Next I needed to combine the rows of columns into a single string. For things like that, I prefer CTE's.
Since there's no guarantee that the GUID rows will be sequential, I can't just join the previous column to the next based on RowOrdinal+1... and doing a simple greater-than (and then filtering on max number of columns) turned out to be MASSIVELY slow, since the number of results expands very quickly when you've got every combination of 300 sequentially incrementing columns... so I needed a RowCounter that was partitioned to the table, to give each column its own incrementing number, which I could use to JOIN the current column to the next as I built my string of columns.

Finally, having the concatenated rows, I throw one more RowCount on the CTE results, to ignore all the records with incomplete strings.

Bump the recursion limit from 100, make sure the queries use NOLOCK (since we have no idea about the column indexes), adjust the string manipulation so that it's a bit more reusable (since it's being posted here), and it's done.

The Query

DECLARE @SearchFieldType  nvarchar(MAX) = 'uniqueidentifier'

      , @SearchComparison nvarchar(MAX) = 'LIKE'

      , @SearchText       nvarchar(MAX) = '00000000-0000-0000-0000-000000000000'


;WITH TableFieldsCTE ( [DB] , [Schema] , [Table] , [FieldCount] , [Fields] ) AS
(
    SELECT DISTINCT
           [TABLE_CATALOG]
         , [TABLE_SCHEMA]
         , [TABLE_NAME]
         , CAST(0 AS bigint)
         , CAST('' AS nvarchar(MAX))
      FROM [INFORMATION_SCHEMA].[COLUMNS]
     WHERE [DATA_TYPE] = @SearchFieldType

     UNION ALL

    SELECT C.[TABLE_CATALOG]
         , C.[TABLE_SCHEMA]
         , C.[TABLE_NAME]
         , C.[FieldIdx]
         , TableFieldsCTE.[Fields] + '[' + CAST(C.[COLUMN_NAME] AS nvarchar(MAX)) + '] '
                                   + @SearchComparison + ' ''' + @SearchText + ''' OR '
      FROM TableFieldsCTE
     INNER JOIN (
             SELECT *
                  , [FieldIdx] = ROW_NUMBER() OVER (PARTITION BY [TABLE_CATALOG], [TABLE_SCHEMA], [TABLE_NAME]
                                                    ORDER BY [ORDINAL_POSITION])
               FROM [INFORMATION_SCHEMA].[COLUMNS]
              WHERE [DATA_TYPE] = @SearchFieldType
           ) C
        ON C.[TABLE_CATALOG]    = TableFieldsCTE.[DB]
       AND C.[TABLE_SCHEMA]     = TableFieldsCTE.[Schema]
       AND C.[TABLE_NAME]       = TableFieldsCTE.[Table]
       AND C.[FieldIdx]         = TableFieldsCTE.[FieldCount] + 1
)
SELECT [DB]
     , [Schema]
     , [Table]
     , [Query] = 'SELECT * FROM [' + [Table] + '] WITH (NOLOCK) WHERE ' + SUBSTRING( [Fields] , 0 , LEN([Fields]) - 2 )
  FROM (
        SELECT [DB], [Schema], [Table], [Fields]
             , [RowFilter] = ROW_NUMBER() OVER (PARTITION BY [DB], [Schema], [Table] ORDER BY [FieldCount] DESC)
          FROM TableFieldsCTE
       ) A
 WHERE [RowFilter] = 1
OPTION (MAXRECURSION 0)


The Results (sample)

WSSContent_CentralAdmin    dbo    AllDocs              SELECT * FROM [AllDocs] WITH (NOLOCK) WHERE [Id] LIKE '00000000-0000-0000-0000-000000000000' OR [SiteId] LIKE '00000000-0000-0000-0000-000000000000' OR [ParentId] LIKE '00000000-0000-0000-0000-000000000000' OR [WebId] LIKE '00000000-0000-0000-0000-000000000000' OR [ListId] LIKE '00000000-0000-0000-0000-000000000000' OR [ScopeId] LIKE '00000000-0000-0000-0000-000000000000' OR [TransformerId] LIKE '00000000-0000-0000-0000-000000000000'

WSSContent_CentralAdmin    dbo    AllDocVersions       SELECT * FROM [AllDocVersions] WITH (NOLOCK) WHERE [SiteId] LIKE '00000000-0000-0000-0000-000000000000' OR [Id] LIKE '00000000-0000-0000-0000-000000000000'

WSSContent_CentralAdmin    dbo    AllFileFragments     SELECT * FROM [AllFileFragments] WITH (NOLOCK) WHERE [SiteId] LIKE '00000000-0000-0000-0000-000000000000' OR [DocId] LIKE '00000000-0000-0000-0000-000000000000'

WSSContent_CentralAdmin    dbo    AllLinks             SELECT * FROM [AllLinks] WITH (NOLOCK) WHERE [SiteId] LIKE '00000000-0000-0000-0000-000000000000' OR [ParentId] LIKE '00000000-0000-0000-0000-000000000000' OR [DocId] LIKE '00000000-0000-0000-0000-000000000000' OR [WebPartId] LIKE '00000000-0000-0000-0000-000000000000' OR [FieldId] LIKE '00000000-0000-0000-0000-000000000000'

Obviously this doesn't include all of the tables... but the query is pretty flexible... the variables at the top define what type of data to look for, how to compare it, and the compared value... or feel free to modify it for your own needs.


I was originally thinking of actually executing the queries in a loop... decided that I can just as easily copy/paste the entire column of queries into a new window, and it's faster than the time it'd take to write the loop.
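For reference, such a loop might look like the following sketch, reusing the SqlConnection pattern from my workflow InstanceData post; $queries is a hypothetical array holding the generated [Query] strings:

```powershell
# hypothetical: $queries holds the [Query] column produced by the generator above
$conn = New-Object System.Data.SqlClient.SqlConnection("Server=SPSQLSERVER;Database=WSS_Content;Trusted_Connection=True;")
$conn.Open()

foreach ($query in $queries)
{
    $cmd = New-Object System.Data.SqlClient.SqlCommand($query, $conn)
    $reader = $cmd.ExecuteReader()
    if ($reader.HasRows) { Write-Host "MATCH : $query" }
    $reader.Close()
}

$conn.Close()
```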

April 10
Last modified by sbrickey on 4/7/2015 2:51 PM

Introduced back in SharePoint 2007 (MOSS) was a Workflow web service, designed to allow external applications to integrate with workflows hosted within SharePoint. Unfortunately, the web service is poorly documented, which leads to articles that provide a wide range of results, often specific to how the workflow is used.

While much of the web service can be figured out easily enough, based on documentation or by online examples, the AlterToDo method seems to cause a lot of confusion. So this article is focused on de-mystifying the voodoo.

A simple search for "workflow AlterToDo" reveals:
- The MSDN documentation :
  - While some of the parameters are very clear (URL of item, ID of task, GUID of task list), the "taskData" hash table has effectively no documentation.
- The MSDN documentation for the Server Object Model :
  - This is the method that the Web Service calls
- Questions on Stack Overflow... mostly unanswered, and all without a clear understanding of "what" or "why"
   - I find this one especially funny, since it again lacks an understanding, plus the accepted answer is complete overkill.
- lots of links to this tutorial:
  - this is the closest thing I've found to a good tutorial for properly using the various web methods, yet the comments agree that it lacks sufficient documentation.
- another approach, again without any documentation:

So... let's dig in...

Microsoft documents the signature as:

[SoapDocumentMethodAttribute("", RequestNamespace="", ResponseNamespace="", Use=SoapBindingUse.Literal, ParameterStyle=SoapParameterStyle.Wrapped)] 

public XmlNode AlterToDo (
	string item,
	int todoId,
	Guid todoListId,
	XmlNode taskData 
)

The first three parameters are pretty simple... let's focus on the XmlNode taskData...
According to the documentation, it's a hash table that's used to update the task item... this is true, but as we're dealing with XML, let's also look at the specifics...

Reflected : WebMethod
The reflected code (with some adjusted formatting for clarity)


public SoapXml.SoapXmlElement AlterToDo(string item, int todoId, Guid todoListId, SoapXml.SoapXmlElement taskData)
{
    try
    {
        if (taskData == null)
            throw new ArgumentNullException("taskData");

        XmlDocument xmlDocument = new XmlDocument();
        xmlDocument.LoadXml(taskData.OuterXml);

        Hashtable taskParameters = !(xmlDocument.FirstChild.Name == "dfs:myFields")
                                 ? XmlUtil.XmlToHashtable2(taskData.OuterXml)
                                 : XmlUtil.XsnXmlToHashtable(SPContext.GetContext(this.Context), taskData.OuterXml);

        return SoapXml.String2SoapXmlElement(this.WorkflowImpl(item).AlterToDo(todoId, todoListId, taskParameters));
    }
    catch (Exception ex)
    {
        throw this.HandleException(ex, "Workflow Soap: AlterToDo", item, todoId, todoListId);
    }
}

So the first thing worth noting, is that dfs:myFields indicates that InfoPath Forms Services is being used to update the task data.

Let's also look at the reflected code from the two XmlUtil methods.

Reflected : XmlUtil.XmlToHashtable2
So in most/normal cases, XmlToHashtable2 is used...

internal static Hashtable XmlToHashtable2(string strXml)
{
    Hashtable hashtable = new Hashtable();
    XmlDocument xmlDocument = new XmlDocument();
    xmlDocument.LoadXml(strXml);

    foreach (XmlNode xmlNode in xmlDocument.FirstChild.ChildNodes)
    {
        string str = XmlConvert.DecodeName(xmlNode.LocalName);
        if (xmlNode.HasChildNodes)
        {
            if (xmlNode.ChildNodes.Count == 1 && !xmlNode.FirstChild.HasChildNodes)
                hashtable[str] = xmlNode.InnerText;
            else
                hashtable[str] = xmlNode.InnerXml;
        }
        else
        {
            hashtable[str] = xmlNode.InnerText;
        }
    }
    return hashtable;
}


This is actually pretty straightforward... it looks into the XML and assumes that the first element defines the values... then for each of its child elements, it assigns the hash table's key based on the element name... the value is determined based on whether the child element itself contains child elements (nested XML), in which case the entire subset of XML is used as the value.

Here are some sample tests and results:


=> [Key1] = "Value1"
=> [Key2] = "Value2"



=> [Key1] = "Value1"
=> [Key2] = "


<my:Data xmlns:my="http://anything">

=> [Key1] = "Value1"
=> [Key2] = "Value2"

Basically, any well formed XML document is acceptable. Just provide a single top level element, and a child element for each entry in the hash table.
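To make that concrete, here's a hedged sketch of calling AlterToDo from PowerShell via New-WebServiceProxy; the site URL, item URL, IDs, and field names are all placeholders, and I'm assuming the generated proxy exposes taskData as an XmlElement:

```powershell
# all URLs/IDs below are placeholders for illustration
$svc = New-WebServiceProxy -Uri "http://sharepoint/_vti_bin/Workflow.asmx?WSDL" -UseDefaultCredential

# any well formed document works: one root element, one child element per hashtable entry
[xml]$taskData = "<Data><TaskStatus>Completed</TaskStatus><PercentComplete>1</PercentComplete></Data>"

$result = $svc.AlterToDo( "http://sharepoint/Shared Documents/file.docx" , # item URL
                          1 ,                                              # todoId (task item ID)
                          [Guid]"00000000-0000-0000-0000-000000000000" ,   # todoListId (task list GUID)
                          $taskData.DocumentElement )                      # taskData
```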

Reflected : XmlUtil.XsnXmlToHashtable
Now let's look at XsnXmlToHashtable, which is normally used when InfoPath Forms Services generates the XML. The dfs:myFields schema is the giveaway, as is the Xsn in the method name (XSN being the InfoPath form template file extension)...

internal static Hashtable XsnXmlToHashtable(SPContext context, string strXml)
{
    return XmlUtil.XsnXmlToHashtable(strXml, true, context);
}

internal static Hashtable XsnXmlToHashtable(string strXml, bool bIsTaskXml, SPContext context)
{
    Hashtable hashtable = new Hashtable();
    XmlDocument xmlDocument = new XmlDocument();
    try
    {
        xmlDocument.LoadXml(strXml);
    }
    catch (XmlException ex)
    {
        return hashtable;
    }

    XmlNode firstChild = xmlDocument.FirstChild;
    XmlNamespaceManager nsmgr = new XmlNamespaceManager(xmlDocument.NameTable);
    string xpath;
    if (bIsTaskXml)
    {
        // registers the dfs: and my: namespaces against nsmgr
        xpath = "/dfs:myFields/dfs:dataFields/my:SharePointListItem_RW";
    }
    else
    {
        if (firstChild.LocalName != "myFields")
            return XmlUtil.XmlToHashtableDoDecode(strXml);

        // registers the dfs: and d: namespaces against nsmgr
        xpath = "/dfs:myFields/dfs:dataFields/d:SharePointListItem_RW";
    }

    XmlNode xmlNode1 = xmlDocument.SelectSingleNode(xpath, nsmgr);
    if (xmlNode1 == null)
        return hashtable;

    foreach (XmlNode xmlNode2 in xmlNode1.ChildNodes)
    {
        string str = xmlNode2.LocalName;
        if (bIsTaskXml)
        {
            if (str.StartsWith("FieldName_", StringComparison.Ordinal))
            {
                if (str == "FieldName_TaskStatus")
                    str = "TaskStatus";

                DateTime result;
                if (context != null &&
                    DateTime.TryParse(xmlNode2.InnerText, CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind, out result) &&
                    result.Kind == DateTimeKind.Unspecified)
                {
                    result = DateTime.SpecifyKind(result, DateTimeKind.Local);
                    DateTime dateTime = DateTime.SpecifyKind(context.RegionalSettings.TimeZone.LocalTimeToUTC(result), DateTimeKind.Utc);
                    xmlNode2.InnerText = dateTime.ToString("o", CultureInfo.InvariantCulture);
                }
            }
        }

        if (!xmlNode2.HasChildNodes ||
            xmlNode2.HasChildNodes &&
            xmlNode2.ChildNodes.Count == 1 &&
            !xmlNode2.FirstChild.HasChildNodes)
        {
            XmlNode xmlNode3 = xmlNode2.SelectSingleNode("@Description");
            if (xmlNode3 != null &&
                !string.IsNullOrEmpty(xmlNode3.Value))
                hashtable[str] = (xmlNode2.InnerText + ", " + xmlNode3.Value);
            else
                hashtable[str] = xmlNode2.InnerText;
        }
        else
        {
            hashtable[str] = xmlNode2.InnerXml;
        }
    }
    return hashtable;
}


This is obviously a lot more complex... and probably where people tend to get into trouble when using online examples...
- It loads the XmlDocument, which checks that the XML is a well formed document.
- It defines the XML namespaces used by InfoPath... for non-task forms, InfoPath uses DFS and D namespaces... for task forms, InfoPath uses DFS and MY
- It defines the XPath to the data, since InfoPath places all SharePoint item lookups under /dfs:myFields/dfs:dataFields/d:SharePointListItem_RW
- It then grabs the data (using the XPath above), and loops through each of the child elements
  - determine the hashtable's key from the element's local name
  - SPECIAL CASE: if the element's name is exactly FieldName_TaskStatus, treat it as TaskStatus
  - SPECIAL CASE: if the element's name starts with FieldName_ and its value is a date/time, adjust the time to UTC
  - Assign the hashtable's value from the element's value (InnerText, or InnerXml if the element contains more XML, just as XmlToHashtable2 does)
  - SPECIAL CASE: if the element has a "Description" attribute, append it to the value

Given the complexity, and the fact that this function is designed to be used (internally) by InfoPath Forms Services, I'm not going to show any test cases.
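That said, if you want to experiment with the overall shape outside of SharePoint, here's a minimal Python analogue of the element loop. The namespace URIs and sample data are invented for illustration; only the prefix-to-URI mapping and the XPath structure mirror the reflected code.

```python
import xml.etree.ElementTree as ET

# Placeholder namespace URIs; InfoPath's real URNs differ, but only the
# prefix-to-URI mapping matters for the path below.
NS = {"dfs": "urn:example:dataFormSolution", "d": "urn:example:dataFields"}

def xsn_xml_to_dict(xml_text, is_task_xml=True):
    """Rough analogue of the reflected element loop: one dict entry per child
    of the SharePointListItem_RW row, keyed by local element name."""
    root = ET.fromstring(xml_text)
    row = root.find("dfs:dataFields/d:SharePointListItem_RW", NS)
    result = {}
    if row is None:
        return result
    for child in row:
        key = child.tag.split("}", 1)[-1]   # strip the {namespace} wrapper
        if is_task_xml and key == "FieldName_TaskStatus":
            key = "TaskStatus"              # the special case called out above
        value = child.text or ""
        desc = child.get("Description")     # append Description when present
        result[key] = value + ", " + desc if desc else value
    return result

SAMPLE = (
    '<dfs:myFields xmlns:dfs="urn:example:dataFormSolution"'
    ' xmlns:d="urn:example:dataFields">'
    '<dfs:dataFields><d:SharePointListItem_RW>'
    '<d:FieldName_TaskStatus>Completed</d:FieldName_TaskStatus>'
    '<d:Title Description="review task">Budget</d:Title>'
    '</d:SharePointListItem_RW></dfs:dataFields>'
    '</dfs:myFields>'
)

print(xsn_xml_to_dict(SAMPLE))
# {'TaskStatus': 'Completed', 'Title': 'Budget, review task'}
```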

Reflected : Workflow.AlterToDo
Now that we've identified how the parameters are defined, let's also look at how they're used...

public string AlterToDo(int todoId, Guid todoListId, Hashtable taskParameters)
{
    SPSite siteFallback = null;
    SPWeb webFallback = null;
    try
    {
        SPList byIdWithFallback = this.GetListByIdWithFallback(todoListId, out siteFallback, out webFallback);
        if (byIdWithFallback == null)
            throw SoapServerException.MakeSoapException("Bad 'todoListId' Parameter.", WorkflowImpl.WorkflowBadParameter);

        SPListItem itemById = byIdWithFallback.GetItemById(todoId);
        if (itemById == null)
            throw SoapServerException.MakeSoapException("Bad 'todoId' Parameter.", WorkflowImpl.WorkflowBadParameter);

        FormUtil.ConvertToWSSTypes(itemById, taskParameters);
        return "<fSuccess>" + (SPWorkflowTask.AlterTask(itemById, taskParameters, true) ? "1" : "0") + "</fSuccess>";
    }
    finally
    {
        if (webFallback != null)
            webFallback.Dispose();
        if (siteFallback != null)
            siteFallback.Dispose();
    }
}

So basically, do some quick lookups, but the real logic is in the FormUtil.ConvertToWSSTypes call, and then the SPWorkflowTask.AlterTask call... Time to reflect further.

Reflected : FormUtil.ConvertToWSSTypes
This function is basically responsible for making sure that the values in the hash table are formed correctly for SharePoint. This would include things like usernames being converted to UserInfo references, etc.

internal static void ConvertToWSSTypes(SPListItem task, Hashtable htFormData)
{
    string[] strArray1 = new string[htFormData.Keys.Count];
    htFormData.Keys.CopyTo(strArray1, 0);
    foreach (string str in strArray1)
    {
        bool flag1 = false;                          // field uses static name
        bool flag2 = task.Fields.ContainsField(str); // field has been located
        if (!flag2)
        {
            flag2 = task.Fields.ContainsFieldWithStaticName(str);
            flag1 = true;
        }

        if (flag2)
        {
            SPField spField = null;
            try
            {   // attempt to locate the SPField
                spField = flag1 ? task.Fields.TryGetFieldByStaticName(str) : task.Fields.GetField(str);
            }
            catch { }

            if (spField != null)
            {
                if (spField.Type == SPFieldType.User)
                {
                    string[] strArray2 = FormUtil.ToUsers(htFormData[str]);
                    if (strArray2 == null || strArray2.Length == 0)
                    {
                        htFormData[str] = null;
                    }
                    else
                    {
                        SPPrincipal spPrincipal = null;
                        try
                        {   // attempt lookup from UserInfo by email address
                            spPrincipal = task.Web.AllUsers.GetByEmail(strArray2[0]);
                        }
                        catch (SPException) { }

                        if (spPrincipal == null)
                        {   // attempt lookup from UserInfo by name
                            try
                            {
                                spPrincipal = task.Web.AllUsers[strArray2[0]];
                            }
                            catch (SPException) { }
                        }

                        if (spPrincipal == null)
                        {   // attempt lookup from AD (and add into UserInfo)
                            try
                            {
                                SPPrincipalInfo spPrincipalInfo = SPUtility.ResolvePrincipal(task.Web, strArray2[0], SPPrincipalType.All, SPPrincipalSource.All, null, false);
                                if (spPrincipalInfo != null)
                                {
                                    if (spPrincipalInfo.PrincipalId == -1 &&
                                        (spPrincipalInfo.PrincipalType == SPPrincipalType.User ||
                                         spPrincipalInfo.PrincipalType == SPPrincipalType.SecurityGroup))
                                        task.Web.SiteUsers.Add(spPrincipalInfo.LoginName, spPrincipalInfo.Email, spPrincipalInfo.DisplayName, null);
                                    spPrincipal = task.Web.AllUsers[spPrincipalInfo.LoginName];
                                }
                            }
                            catch (SPException) { }
                        }

                        if (spPrincipal == null)
                        {   // attempt lookup from Groups by name
                            if ((spField as SPFieldUser).SelectionMode == SPFieldUserSelectionMode.PeopleAndGroups)
                            {
                                try
                                {
                                    spPrincipal = task.Web.SiteGroups[strArray2[0]];
                                }
                                catch (SPException) { }
                            }
                        }

                        htFormData[str] = spPrincipal;
                    }
                }
                else if (spField.Type == SPFieldType.DateTime)
                {
                    if (string.IsNullOrEmpty(htFormData[str] as string))
                        htFormData[str] = null;
                    else
                        htFormData[str] = FormUtil.ToDate(htFormData[str]);
                }
                else if (spField.Type == SPFieldType.MultiChoice)
                {
                    htFormData[str] = FormUtil.ConvertMultiChoiceFieldValue(htFormData[str], true);
                }
            }
        }
    }
}

I've adjusted the formatting, but all said and done, it's actually not too much... it's basically just checking the type of field, validating users, formatting dates, etc.
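The user-resolution fallback chain is the part most worth internalizing: try UserInfo by email, then UserInfo by name, then the directory, then site groups, swallowing exceptions at each step. A minimal Python sketch of that pattern (the directories and names below are hypothetical, with KeyError standing in for the swallowed SPException):

```python
def resolve_principal(name, lookups):
    """Walk an ordered list of lookup callables and return the first match.

    Each callable stands in for one of the reflected fallbacks (UserInfo by
    email, UserInfo by name, directory resolution, site groups)."""
    for lookup in lookups:
        try:
            found = lookup(name)
            if found is not None:
                return found
        except KeyError:
            pass        # this lookup had no match; fall through to the next
    return None

# Hypothetical directories standing in for task.Web.AllUsers / SiteGroups
users_by_email = {"jane@contoso.com": "user:jane"}
users_by_name = {"CONTOSO\\jdoe": "user:jdoe"}
site_groups = {"Approvers": "group:Approvers"}

chain = [users_by_email.__getitem__, users_by_name.__getitem__, site_groups.__getitem__]
print(resolve_principal("Approvers", chain))     # group:Approvers
print(resolve_principal("nobody", chain))        # None
```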

Reflected : SPWorkflowTask.AlterTask
Clearly, the real magic lies in SPWorkflowTask.AlterTask

public static bool AlterTask(SPListItem task, Hashtable htData, bool fSynchronous)
{
    if (task == null)
        throw new ArgumentNullException("task");

    if (htData != null)
        SPWorkflowTask.SetWorkflowData(task, htData, true);

    DateTime now = DateTime.Now;
    task[SPBuiltInFieldId.PendingModTime] = (object)now;
    task.Update(); // commit the change so the workflow can pick it up

    if (!fSynchronous)
        return true;

    int millisecondsTimeout = 200;
    int id = task.ID;
    SPList parentList = task.ParentList;
    do
    {
        millisecondsTimeout += 200;
        Thread.Sleep(millisecondsTimeout);
        try
        {
            task = parentList.GetItemById(id);
        }
        catch (ArgumentException)
        {
            task = null; // the item was deleted while the workflow ran
            break;
        }
    }
    while ((int)task[SPBuiltInFieldId.WorkflowVersion] > 1 && millisecondsTimeout < 3400);

    bool flag = task == null;
    object obj = task == null ? null : task[SPBuiltInFieldId.PendingModTime];
    if (obj != null)
    {
        DateTime dateTime = (DateTime)obj;
        flag = now.Date == dateTime.Date && now.Hour == dateTime.Hour && now.Minute == dateTime.Minute && now.Second == dateTime.Second;
    }
    return flag;
}


So, some magic is actually occurring here...
- First, it sets any fields that were provided in the HashTable using SetWorkflowData (more on this later)
- Second, it sets PendingModTime (overriding if it was provided in the hash table)
- if the operation is supposed to be performed asynchronously, it returns immediately
- otherwise, it waits (re-reading the task with an ever-growing delay, capped around 3.4 seconds) until the workflow version has returned to 1, which indicates that the pending change has been processed.
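The synchronous wait is just a poll-with-backoff loop; here's an illustrative Python analogue (not SharePoint code) with the sleep injected so it can be exercised without real delays:

```python
def wait_for_workflow(fetch_version, sleep):
    """Sketch of AlterTask's synchronous wait: sleep 400ms, then 600, 800...
    re-reading the task until WorkflowVersion drops back to 1 or ~3.4s pass."""
    timeout_ms = 200
    while True:
        timeout_ms += 200
        sleep(timeout_ms)
        if fetch_version() <= 1:
            return True       # the pending change was processed
        if timeout_ms >= 3400:
            return False      # gave up; the caller falls back to PendingModTime

delays = []
versions = iter([3, 2, 1])    # the workflow catches up on the third poll
ok = wait_for_workflow(lambda: next(versions), delays.append)
print(ok, delays)             # True [400, 600, 800]
```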

Since we're especially interested in the fields, let's look into that SetWorkflowData method as well...

Reflected : SPWorkflowTask.SetWorkflowData
Generally speaking, this method assigns the hashtable data (which was previously formatted for SharePoint) to the SPListItem's fields

internal static void SetWorkflowData(SPListItem task, Hashtable newValues, bool ignoreReadOnly)
{
    object obj1 = task[SPBuiltInFieldId.ContentTypeId];
    string id = obj1 as string;
    SPContentTypeId contentTypeId = id == null ? (SPContentTypeId)obj1 : new SPContentTypeId(id);
    if (!contentTypeId.IsChildOf(SPBuiltInContentTypeId.WorkflowTask))
        throw new NotSupportedException();

    Hashtable propertiesAsHashtable = SPWorkflowTask.GetExtendedPropertiesAsHashtable(task);
    foreach (DictionaryEntry dictionaryEntry in newValues)
        propertiesAsHashtable[dictionaryEntry.Key] = dictionaryEntry.Value;

    StringBuilder stringBuilder = new StringBuilder(1024);
    foreach (DictionaryEntry dictionaryEntry in propertiesAsHashtable)
    {
        object obj2 = dictionaryEntry.Key;
        SPField spField = obj2 is string
                        ? task.Fields.GetField((string)obj2, false)
                        : (obj2 is Guid ? task.Fields.GetFieldById((Guid)obj2, false) : null);

        if (spField == null && obj2 is string && ((string)obj2).StartsWith("FieldName_"))
        {
            spField = task.Fields.TryGetFieldByStaticName((string)obj2);
            if (spField != null)
                obj2 = spField.Id;
        }

        if (spField != null)
        {
            if (!ignoreReadOnly || !spField.ReadOnlyField)
            {
                if (obj2 is string)
                    task[(string)obj2] = dictionaryEntry.Value;
                else
                    task[(Guid)obj2] = dictionaryEntry.Value;
            }
        }
        else if (dictionaryEntry.Value != null)
        {
            string key = obj2.ToString();
            if (!key.StartsWith("ows_"))
                key = "ows_" + key;
            stringBuilder.Append(key);
            stringBuilder.Append("='");
            // value is encoded here (the exact encoding call was lost from the listing)
            stringBuilder.Append(SPHttpUtility.HtmlEncode(dictionaryEntry.Value.ToString()));
            stringBuilder.Append("' ");
        }
    }
    task[SPBuiltInFieldId.ExtendedProperties] = stringBuilder.ToString();
}


This method is deceptive...
- First, it verifies that the SPListItem's content type inherits from Workflow Task.
- Second, it gets the task's existing ExtendedProperties, and overlays the values from the hash table onto THEM.
- Third, it gets the field, either by display name, or by ID if the key is a GUID
  - SPECIAL CASE: if the hash table's key starts with "FieldName_", it is assumed to be the field's STATIC/INTERNAL name
- Assuming it found the field, the field's value is set from the hash table (skipping read-only fields when asked to)
- If the field was NOT found, it is encoded (using "ows_" as a prefix to the key) and added to the Extended Properties

The real magic is in that last statement. If for any reason, the hash table's key isn't actually a field in the task, it's STILL KEPT... albeit hidden
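That "still kept" fallback is easy to model; here's a Python sketch of the packing step (the real code also encodes the value, which is omitted here, and the keys below are hypothetical):

```python
def pack_extended_properties(values):
    """Sketch of SetWorkflowData's fallback: keys that don't match a real task
    field are prefixed with ows_ and packed into one attribute-style string."""
    parts = []
    for key, value in values.items():
        if value is None:
            continue                     # nulls are dropped, as in the original
        if not key.startswith("ows_"):
            key = "ows_" + key
        parts.append("%s='%s' " % (key, value))
    return "".join(parts)

packed = pack_extended_properties({"FieldName_Comments": "Looks good", "Skip": None})
print(packed)
# → "ows_FieldName_Comments='Looks good' "  (note the trailing space)
```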

Here's what the Extended Properties look like for two versions of the same task...

Version == 1 : newly created

 ows_FieldName_ConsolidatedComments='Approval started by USERNAME on DATE TIME &amp;lt;br /&amp;gt;Comment: '

Version == 2 : Approved

 ows_FieldName_ConsolidatedComments='Approval started by USERNAME on DATE TIME &#xD;&#xA;Comment: '

Notice that the FieldName_ prefix is USUALLY used, but TaskStatus is an exception.
- All of the fields with the FieldName_ prefix don't actually belong to the task's metadata... they are used by the WORKFLOW to store data during the Approval Process.
- TaskStatus doesn't have a prefix because InfoPath Forms Services generates the XML for AlterTask using the dfs:myFields element, so the XML is converted to a hashtable by XmlUtil.XsnXmlToHashtable, which treats TaskStatus as a special case.

Examining the use of ExtendedProperties
OK, so we now know how the data provided to AlterToDo is converted into a hash table, and assigned to the task metadata, or assigned to the ExtendedProperties... but so far we've not seen how that matters. As it turns out, ExtendedProperties are what workflows use internally, as a generic placeholder for its data.

Tasks 301: SharePoint Server (standard edition) provides some out-of-the-box workflow actions, such as performing the Approval Process, or Collecting Feedback. In actuality, the Approval Process tasks are the same tasks used by Collecting Feedback, with a touch of extra logic at the end to set the approval status. As a result, the logic involved in RECEIVING the ExtendedProperties actually lives in CollectDataTask, in the AfterOnTaskChanged event handler!

Reflected : CollectDataTask.AfterOnTaskChanged

private void AfterOnTaskChanged(object sender, EventArgs e)
{
    if (this._outProperties.ExtendedProperties.Contains(SPBuiltInFieldId.FormData))
    {
        CollectDataTask collectDataTask = this;
        int num = collectDataTask._taskShouldContinue & string.CompareOrdinal((string)this._outProperties.ExtendedProperties[SPBuiltInFieldId.FormData], "Completed") != 0
                ? 1 : 0;
        collectDataTask._taskShouldContinue = num != 0;
    }

    if (this._outProperties.ExtendedProperties.Contains("TaskStatus"))
    {
        CollectDataTask collectDataTask = this;
        int num = collectDataTask._taskShouldContinue & string.CompareOrdinal((string)this._outProperties.ExtendedProperties["TaskStatus"], "Completed") != 0
                ? 1 : 0;
        collectDataTask._taskShouldContinue = num != 0;
    }

    this.OutProperties = this._outProperties;
}


Here, we start to see what matters...
There is a property, called _taskShouldContinue, which:
  - is set if the ExtendedProperties includes FormData
  - is set (possibly overridden) if the ExtendedProperties includes TaskStatus
Now, why it does this is anyone's guess. Feel free to reflect into the code yourself if you don't believe this. Maybe it's been changed in an update.

Proper Usage
So, where does this leave us as far as the PROPER way to update the workflow tasks?

1. Create an XML snippet with a single root element
  - you CAN use dfs:myFields as the root element, but I would avoid it, since it's explicitly designed for InfoPath.

2. All said and done, you NEED to include TWO properties:
  - WorkflowOutcome, to determine whether the task was Approved or Rejected
    [Field ID="{18e1c6fa-ae37-4102-890a-cfb0974ef494}" StaticName="WorkflowOutcome" Title="Outcome"]

  - TaskStatus = "Completed"
    [Field ID="{c15b34c3-ce7d-490a-b133-3f4de8801b76}" StaticName="Status" Title="Status"]

3. To be PROPER, you should also set the following metadata:
    without these, the task will still be considered "completed", but will show inconsistencies such as 0% complete, or a Status of "Not Started".

  - Completed = "TRUE"
    [Field ID="{35363960-d998-4aad-b7e8-058dfe2c669e}" StaticName="Completed" Title="Completed"]

  - PercentComplete = 1.0
    [Field ID="{d2311440-1ed6-46ea-b46d-daa643dc3886}" StaticName="PercentComplete" Title="% Complete"]

4. Optionally, you can set the following:
  - FormData = "Completed"
    [Field ID="{78eae64a-f5f2-49af-b416-3247b76f46a1}" StaticName="FormData" Title="Form Data"]
    TaskStatus takes precedence over this, but it wouldn't hurt

5. Finally, if you have custom fields in your task, you will need to assign them. These will be entirely dependent on your customizations.
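Pulling the steps above together, the set of values handed to AlterTask would look something like this. It's sketched here as a Python dict for readability (the actual call takes a .NET Hashtable); the key names follow the list above, and any custom fields from step 5 would simply be appended:

```python
# Values for completing an approval task, keyed as described in steps 2-4.
task_update = {
    "TaskStatus":      "Completed",  # required: marks the task complete
    "WorkflowOutcome": "Approved",   # required: Approved or Rejected
    "Completed":       "TRUE",       # proper: keeps the metadata consistent
    "PercentComplete": 1.0,          # proper: avoids a lingering 0% complete
    "FormData":        "Completed",  # optional: TaskStatus takes precedence
}

print(sorted(task_update))
```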

Additional Use of SPWorkflowTask.SetWorkflowData
While digging through other areas of code, this snippet was also discovered, again setting TaskStatus, WorkflowOutcome (required), as well as PercentComplete and Completed (proper)

public void CompleteTask(Guid taskId, string taskOutcome)
{
    try
    {
        SPListItem task = this.GetTask(taskId);
        if (task == null)
            return;

        Hashtable newValues = new Hashtable();
        newValues[(object)SPBuiltInFieldId.TaskStatus] = SPResource.GetString(new CultureInfo(task.Web.Language, false), "WorkflowTaskStatusComplete", new object[0]);
        newValues[(object)SPBuiltInFieldId.WorkflowOutcome] = taskOutcome;
        newValues[(object)SPBuiltInFieldId.PercentComplete] = 1;
        newValues[(object)SPBuiltInFieldId.Completed] = "TRUE";
        if ((bool)task[SPBuiltInFieldId.SendEmailNotification])
            newValues[(object)SPBuiltInFieldId.SendEmailNotification] = task[SPBuiltInFieldId.Editor] == null || task[SPBuiltInFieldId.AssignedTo] == null
                                                                      ? 1
                                                                      : task[SPBuiltInFieldId.Editor].ToString() != task[SPBuiltInFieldId.AssignedTo].ToString()
                                                                      ? 1
                                                                      : 0;

        SPWorkflowTask.SetWorkflowData(task, newValues, false);
        this.PendingTask[taskId] = task;

        // the statement that queues this pending work was lost from the listing
        new SPPendingWork(
            new SPWorkflowExternalDataExchangeService.BatchOperation(this.CommitTask),
            new object[5]
            {
                (object) task,
                (object) taskId,
                (object) this.PendingTask,
                (object) this.CurrentWorkflow.TaskListId,
                (object) null
            });
    }
    catch (Exception ex)
    {
        ULS.SendTraceTag(926049840U, (ULSCatBase)ULSCat.msoulscat_WSS_Workflow, ULSTraceLevel.Medium, "{0}", new object[1] { ex.Message });
    }
}


February 11
Last modified by sbrickey on 2/10/2015 10:24 PM

Far too often, someone asks how they can best implement item level security. Generally, this occurs when a form is being designed, with the next step being to design a workflow that will restrict access to the form to the user who created it.

When I hear these questions, my spidey sense tells me that there's a good chance that asking some additional questions will reveal more problems in the solution's architecture.

And while this article tends to target SharePoint, I am not saying that SharePoint is specifically to blame.

> The first, and most common objection, is a result of the SharePoint security model, combined with the underlying SQL database system. Those unfamiliar with the topic may wish to read Understanding SharePoint's Security Model
The net result is that any given item in SharePoint is subject to analysis of potentially dozens of rows of data. Multiply this by the number of files being considered at a given time (such as all the files in the root of a document library), and the result is MASSIVE performance penalties.
While the link is specific to SharePoint's security model, the same can be said of NTFS permissions, or row-based security implemented within a database. The issue simply boils down to tons of files, multiplied by a few ACLs, possibly some AD lookups (group memberships and whatnot)... it's simply time consuming.

> The second, and far less frequently discussed topic, is that of delegation during absence.
For example: what happens if one of the users leaves the organization, changes departments, or changes roles?
In some cases, the user will gladly continue fulfilling the requests from their old role... but really, this need should be addressed by letting the departing person remain in the old role only while transitioning it to their replacement.

> The final topic, also not commonly addressed during the initial planning, is that of archiving the data.
Good or bad, most solutions don't really address data expiration or archival. Most solutions focus only on the ACTIVE dataset, and only worry about data archival if and when it's necessary. In the SharePoint world, this tends to happen when the Item View Threshold is met, usually at its default value of 5000 (based on SQL's lock escalation).

Considering Security Needs
Generally speaking, there are very few cases which REQUIRE item level security, both in the short term, and in the long term.

- Only data that includes personally sensitive information (SSNs and such) is actually in need of user based item level security, both in the short term, and in the long term. In these cases, I would question whether the sensitive information is actually necessary in the form. Perhaps it's redundant to information in an HRIS database; perhaps it's better stored in the user profile database in a secured property... there are lots of places to store data securely, and the form is probably not the best place.

- Some data is only worth securing while it's active. For example, a purchase order for a replacement chair isn't necessarily sensitive once the employee has moved on to a new department. These items may be worth securing while a workflow is in progress, but their permissions can be reset when complete.

- Most data, however, simply doesn't REQUIRE user based item level security, and instead should be hidden with a filtered view.

Considering Data Archival
The crux of the problems that result from item level security is in the volume of data being considered at once. We can address this problem by segmenting the data such that only reasonable volumes of data need to have their permissions analyzed.

- For data that does not need permanent user-based item level security, the simplest solution is often sufficient: create an 'archive' folder with a core set of permissions (such as read-only permissions for everyone). Once the item is ready for archival, its permission inheritance can be restored, and the item moved into the 'archive' folder.

- For data that needs to permanently maintain user based item level security, a folder and metadata based archival design should be considered.
  - Generally speaking, folders provide separation that is applied to most default use cases, such as views and explorer integration (WebDAV). In some cases, a simple "archive" folder may be sufficient, while most cases likely require date based subfolders, including multiple tiers for parts of a date such as year / month. The additional subfolders tend to be necessary to keep the number of items per folder below the item view threshold.
  - Additionally, the use of metadata is often necessary to maintain support for advanced views that specifically subvert the folder structure. These views may be used for reporting purposes, and thus need to include historical data. In these cases, the use of indexes and filters can be applied to reduce the number of items below the item view threshold.
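As a rough sketch of that tiering decision (the folder names and the counting mechanism are invented for illustration), the idea is to add a deeper date tier only once a folder would cross the item view threshold:

```python
from datetime import date

def archive_folder(created, folder_counts, threshold=5000):
    """Choose a date-based subfolder tier, adding a month tier only once the
    year folder would cross the item view threshold."""
    year = "archive/%d" % created.year
    if folder_counts.get(year, 0) < threshold:
        return year
    return "archive/%d/%02d" % (created.year, created.month)

counts = {"archive/2014": 6200}   # 2014 already exceeds the threshold
print(archive_folder(date(2014, 3, 9), counts))   # archive/2014/03
print(archive_folder(date(2015, 1, 2), counts))   # archive/2015
```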

February 5
Last modified by sbrickey on 2/3/2015 2:56 PM
This question is fairly common, so here we go...

Permissions are always tricky, and many systems implement permissions differently. Some systems are overly simplistic, while others are overly complex. Here are some historical examples of permission systems:

- UNIX / POSIX style (Linux, BSD, etc)
  Read | Write | Execute   --   User | Group | World
  The default ("World") can be granted permissions to Read, Write, and/or Execute (such as batch scripts)
  Members of the Group to which the file belongs have overriding permission to Read, Write, and/or Execute
  Finally, the User to which the file belongs has overriding permission to Read, Write, and/or Execute
  If you are not given permission through one of these three options, you have no permission to the file (or directory)

- Windows NTFS
  Inherit from parent (yes/no)
  Read | Write | List Folder Contents (for directories)  --  Local User / Local Group / Domain User / Domain Group  --  Allow / Deny
  When inheriting from parent, unique permissions are additive
  Whether permissions are applied from a local group or domain group makes no difference
  Priorities for conflicting permissions are: User Deny > User Allow > Group Deny > Group Allow
    (user deny takes priority over user allow, which takes priority over group deny, which takes priority over group allow)
    basically, deny takes priority over allow, more specific (user) over less specific (group)
  There are also some less common options as well (domain computers, builtin\EVERYONE, CREATOR, etc)

- Windows Shared Folders
   similar to NTFS (read/write, local user/local group/domain user/domain group, allow/deny)
   These permissions do NOT replace NTFS permissions (when the contents being shared are stored on an NTFS volume).
     When using NTFS, a user's access must be validated BOTH by the SHARE as well as NTFS
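The NTFS conflict rules above reduce to a simple priority order, which can be expressed directly. In this sketch each ACE is reduced to who it names (user or group) and whether it allows or denies:

```python
# Priority for conflicting ACEs, per the rules above:
# user deny > user allow > group deny > group allow
PRIORITY = {
    ("user", "deny"): 0, ("user", "allow"): 1,
    ("group", "deny"): 2, ("group", "allow"): 3,
}

def effective_access(aces):
    """Return 'allow'/'deny' from the highest-priority matching ACE,
    or 'none' when no ACE names the user at all."""
    if not aces:
        return "none"
    winner = min(aces, key=PRIORITY.__getitem__)
    return winner[1]

# A group allow loses to an explicit user deny...
print(effective_access([("group", "allow"), ("user", "deny")]))   # deny
# ...but a user allow beats a group deny (more specific wins).
print(effective_access([("group", "deny"), ("user", "allow")]))   # allow
```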

Design Goals
Within the Windows world, permissions have been a pain point for administrators for a LONG while. One of the goals of SharePoint was to SIMPLIFY the permission model, while still providing the necessary flexibility, and as much reusability as possible.

SharePoint's model is:
  - Many specific "base" permissions (View, Add, Edit, Delete, Approve)
  - Permission Levels create groups of the permissions ("Contributor" = View + Add + Edit + Delete)
  - Permissions are applied to a "SharePoint Principal"... which is just a fancy way of saying that it's one of the following:
    - User
    - External Group - since authentication can be handled by a number of different methods (Windows, MembershipProvider, Claims), this may be any of:
        - Windows local or domain group
        - Group as defined by the Membership Provider (ex: SQL Membership Provider can have custom groups, LDAP membership provider would use LDAP groups)
        - Group as defined by a claim rule using a Claims provider (ex: user with location "US", user with group includes "Human Resources")
    - SharePoint Group
        - can include Users and External Groups
  - Permissions use the GRANT model ONLY. There is no deny*

The model provides flexibility in two specific areas of its implementation

Permission Levels
By allowing base permissions to be grouped, permission levels tend to represent roles that users perform.
By default, SharePoint will create the following permission levels:
 - Read provides all of the permissions necessary to read/open/access the contents within SharePoint.
 - Contribute provides all of the permissions from Read, as well as those necessary to add, edit, or delete contents.
 - Approve provides all of the permissions from Contribute, as well as permission to approve or reject content.
 - Design provides all of the permissions from Approve, as well as permission to adjust the look of the site (colors, themes, etc)
 - Manage Hierarchy provides all of the permissions from Contribute, as well as permission to add and delete lists, libraries, and subsites
 - Full Control provides everything.

Further, you can create your OWN permission levels. Examples that I've seen:
 - Submit Only (add, but not edit or delete)
 - Contribute Without Delete (basically Contribute, but excluding permission to delete items)
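Under the hood, base permissions are bit flags and permission levels are just named combinations of them, which makes the grant-only model cheap to evaluate. A Python sketch with a handful of illustrative flag values (the real SPBasePermissions enum is larger):

```python
from enum import IntFlag

class BasePerm(IntFlag):
    """A few illustrative base permissions; the real enum has ~36 of these."""
    VIEW_LIST_ITEMS   = 0x1
    ADD_LIST_ITEMS    = 0x2
    EDIT_LIST_ITEMS   = 0x4
    DELETE_LIST_ITEMS = 0x8
    APPROVE_ITEMS     = 0x10

# Permission levels are named groupings of the flags
READ        = BasePerm.VIEW_LIST_ITEMS
CONTRIBUTE  = READ | BasePerm.ADD_LIST_ITEMS | BasePerm.EDIT_LIST_ITEMS | BasePerm.DELETE_LIST_ITEMS
SUBMIT_ONLY = READ | BasePerm.ADD_LIST_ITEMS   # custom level: add, but no edit/delete

def has_permission(granted_levels, needed):
    """Grant-only model: OR every level the user holds, then test the bit."""
    effective = BasePerm(0)
    for level in granted_levels:
        effective |= level
    return (effective & needed) == needed

print(has_permission([SUBMIT_ONLY], BasePerm.ADD_LIST_ITEMS))   # True
print(has_permission([SUBMIT_ONLY], BasePerm.EDIT_LIST_ITEMS))  # False
```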

SharePoint Groups
If you ask a systems administrator how permissions should be applied, I would bet that 99% of the people will say "Active Directory Group". This answer is especially common in certain heavily regulated industries (healthcare, financial, government), often because a report of groups and their users is readily available for security teams and auditors.

While that answer has a certain amount of validity, it also places a burden on the IT staff.

And unfortunately, there are a LOT of organizations that don't have the IT staff or budget necessary to manage such an approach.

Sample Permission Analysis
So, from the File to the Permissions, SharePoint has to determine the following:
  File  ~  Each permission rule  ~  SharePoint Group (optional)  ~  External Group  ~  User  ~  Permission Level  ~  Base Permission

each separate area can (and often will) multiply the rows of data. For example:
  Example.PDF  ~  ACL #1  ~  Site Visitors  ~  DOMAIN\All Domain Users  ~  Jane Doe, John Smith  ~  Read  ~ ViewListItems, OpenItems, Open, ViewPages
  Example.PDF  ~  ACL #2  ~  Site Contributors  ~  DOMAIN\HR  ~  Peggy, Dan, Erin  ~  Contribute  ~  ViewListItems, AddListItems, EditListItems, ...
  Example.PDF  ~  ACL #3  ~  Site Owners  ~  Peggy, Dan  ~  Full Control  ~  all 36 permissions

so just this one file, having 3 sets of permissions (visitors, contributors, owners), can generate around 110 individual permissions (8 + 30 + 72) that SharePoint needs to evaluate.

Results vs Goals : Conclusion

By removing the ability to DENY permissions, the SharePoint permission model has eliminated many of the headaches that are present in NTFS.

By controlling access to the content through its APIs exclusively, the headaches of combining NTFS and Shared Folder permissions were avoided entirely; further, by supporting other protocols (WebDAV), SharePoint provides a usability experience similar to Shared Folders with its unified security model.

By providing Permission Levels, the security model allows reuse of permissions and focuses on Roles, rather than individual permissions.

By providing SharePoint Groups, the use of groups can be implemented by smaller organizations (too small to manage permissions as AD groups), or by isolated teams where reuse is not necessary.

This flexibility has a cost, however: determining a user's permissions can require more lookups (SharePoint 2010 added the "Effective Permissions" button to aid in this process), and the relationships between Permission Levels and groups can be confusing, as can the relationship between SharePoint Groups and Domain Groups.

Unfortunately, it's about as good as it can get. The simple truth is that in some ways, we live in a complex world, and developing a model that applies to such a large scope of users is not possible without some complexity. All in all, the permission model is only complex until you take the effort to understand it.

* there is ONE place that DENY can be used, and I won't say where because it SHOULDN'T be used.
- for those curious, SharePoint only asks the authentication provider what groups you're in... it's up to the provider to flatten any recursive group memberships... Windows does this automatically, though some providers like the SqlMembershipProvider don't support nested groups
- Technically, SharePoint doesn't need to evaluate each of those base permissions individually, as they are implemented as bit flags that can be combined using rather trivial calculations. The net result is that instead of analyzing all 110 combinations, SharePoint would actually read 7 records (2 + 3 + 2), flatten their permissions with a bitwise OR operation, and compare the result to the operation being attempted.