Recovering from Lost Workflow Server in SharePoint 2013

Recently one of my client’s main app servers in production went belly up. We believe there was some corruption in the VM image. Whatever the reason, it was not a pretty situation to be in. We had to rebuild the machine. As part of this it was decided to re-install the workflow engine, and the previous database associated with the engine was not used. First problem. So the workflow engine was installed and started from scratch, and from testing everything seemed to be working fine, granted testing only involved creating a new workflow in SharePoint Designer to make sure it recognized that the scope and engine were configured.

Here is where we ran into a problem. We had a custom workflow definition that we were deploying to the different site collections that were created for different groups. After the recovery was done we started receiving reports from users that the workflows would throw errors and not start. I did some digging in the ULS and found that the workflow engine would throw one of two errors: it would either throw a scope-not-found exception or a workflow-not-found exception, both with a root error of a 404 coming from the workflow engine (more on this later). So I deactivated the feature that added the workflow and then let the web part that started the workflow handle the activation and setup. The workflow would still fail. Continue reading “Recovering from Lost Workflow Server in SharePoint 2013”
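For reference, once a rebuilt Workflow Manager farm is in place, each site collection has to be re-paired with the new workflow host using Register-SPWorkflowService. A minimal sketch, assuming the Workflow Manager Client is installed on the server running the script; the URLs are placeholders:

```powershell
# Assumes the SharePoint PowerShell snap-in and the Workflow Manager Client
# are available on this server. URLs below are placeholders.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$site = Get-SPSite "https://sharepoint.contoso.com/sites/teamsite"

# -Force re-registers the site collection even if a stale registration
# from the old workflow farm already exists.
Register-SPWorkflowService -SPSite $site `
    -WorkflowHostUri "https://workflow.contoso.com:12290" `
    -Force
```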

ITG becomes ZAACT

Update (1/20/2016): The company put up their own blog post, ITG Becomes ZAACT. In their post they explained where the name came from. I think I may still have fun with my Joker moments, though. Time will tell.

The company that I work for recently decided, for numerous reasons, to rename itself. Most of these reasons boil down to the fact that using an acronym as your main company name makes it easy to have conflicts. There are several ITGs across the United States, and this can cause problems with brand recognition. The renaming process has been a long one. It is a challenge to find a new name for an existing company because all the employees have an interest in what it should be called. After several surveys, and some really bad names that were thrown out early on, ITG had a reveal party last week where the new name was announced to employees and partners. Continue reading “ITG becomes ZAACT”

PowerShell Script to Transfer Search Configuration

I was working with a client a little while back and we needed an easy way to set up search on the dev and test SharePoint environments. What I came up with was the following script: it takes a source and a destination URL and transfers the search settings from the one environment to the other. I am publishing this with the consent of, and at the request of, this client because he thought it was cool enough to share. I have not used this in a while, but I do remember that some interesting error messages came up when trying to migrate the schema because of duplicate properties; there is an option to exclude the schema if you are getting those errors. Enjoy.
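The zip below is the authoritative version. As a rough illustration of the idea only (not the original script), here is a hedged sketch that copies managed properties from one Search Service Application to another, skipping duplicates, which is where those error messages came from. The service application names are placeholders:

```powershell
# Illustrative sketch only - not the original script from the zip.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$sourceSsa = Get-SPEnterpriseSearchServiceApplication "Source Search Service"
$destSsa   = Get-SPEnterpriseSearchServiceApplication "Dest Search Service"

foreach ($prop in Get-SPEnterpriseSearchMetadataManagedProperty -SearchApplication $sourceSsa) {
    # Skip properties that already exist on the destination to avoid
    # the duplicate-property errors mentioned above.
    $existing = Get-SPEnterpriseSearchMetadataManagedProperty `
        -SearchApplication $destSsa -Identity $prop.Name -ErrorAction SilentlyContinue
    if ($existing -eq $null) {
        New-SPEnterpriseSearchMetadataManagedProperty -SearchApplication $destSsa `
            -Name $prop.Name -Type $prop.ManagedType | Out-Null
    }
}
```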

I tried embedding the scriptlet itself, but some of the XML was getting lost, so here is a zip of the .ps1 file.


SharePoint Fest Seattle 2015

I have been invited to present the following sessions at SharePoint Fest Seattle 2015, August 18-20, 2015. I look forward to meeting everyone and sharing this information.

SharePoint Fest Seattle 2015 Session Abstracts

DEV 203 – SharePoint 2013 and AngularJS Lessons Learned from the Trenches

Thanks to SharePoint 2013 it is a lot easier to create client-based solutions. AngularJS is one of the frameworks out there that allows developers to create applications very quickly through an MVC implementation.

In this session we will go over the basics of AngularJS in regards to apps, controllers, services, and views. We will create a simple application built entirely through client technologies and in SharePoint. The principles learned here can be used anywhere in SharePoint, whether it is a farm solution in an on-premises environment or an app on Office 365.

Come learn a lot of the pitfalls to avoid and some best practices that will help you love AngularJS as another tool in a developer’s tool belt.

SIA 105 – PowerShell to the Rescue, For Developers and Admins

For a developer, PowerShell was one of those tools that had to be used to deploy things to SharePoint, but it is so much more. PowerShell allows the easy creation of simple or complex scripts to accomplish so many things. PowerShell can even be used to manage your SharePoint environment remotely.

In this session we will cover the basics of PowerShell and then we will dive into some concepts that will be beneficial for anyone that works with the SharePoint backend. We will cover how to use PowerShell to automate both on-premises and Office 365 SharePoint environments.

Announcing Release of AngularSP to Beta

I am announcing the initial release of my first open source project, AngularSP. I have been working more and more with AngularJS and I love it. AngularJS has saved me so much development time. The problem I saw was that on every project I worked on, I had to embed or re-create the connections with the SharePoint REST services. I kept thinking, I really should just write a service similar to $http to encapsulate the functionality. So here is the initial release of the factory. Currently the library handles all of the CRUD operations on SharePoint lists hosted within the same domain as the calling page (no cross-site calls as of right now; that will come later). The library will work with on-premises SharePoint as well as SharePoint Online in Office 365. The plan is to add more functionality as time goes on.

The current priorities are listed on the CodeProject site but I am including them here as well for now.

  1. Automatic handling of the Request Digest Token for REST (implemented)
  2. Search calls
  3. All methods implemented for the request executor

Let me know if there are any features that you would like to see added.

AngularSP on CodePlex

SharePoint Server 2013 – Repair Install Error Message in Russian

Here is an interesting issue that I ran into recently. It took me a while to realize what was going on, so I figured I would share so others may benefit. I had a client whose SharePoint search died in dev. In the process of trying to get it back up, someone mentioned just doing a repair on the SharePoint install from Control Panel -> Programs and Features. So I tried this and got a really fun error message, included above. It looked to me like this error message was in Russian. So I am asking myself, why is my environment giving me error messages in Russian? I ran a psconfig, which completed just fine, so I ran the repair again and got the same error message.

This morning I showed this to a coworker who speaks Russian and he confirmed that it was indeed Russian. Then the light came on: I could use Bing Translator on my phone to get the real error message. After doing this the resolution was simple. Here is the output of the translation from my phone.
SharePoint Error Russian Translation
Well, this error message makes a lot more sense. So I went and downloaded the SharePoint 2013 install with SP1, and when I ran the install it gave me the same options as the change from Programs and Features. I selected repair and it started working. Interestingly, the error message says to check the setup.chm file, so I went looking for that as part of my searching. Well, that file didn’t exist, so it is not helpful. The folder is there until you click close; then it goes away as well.

So if you see this error message in Russian with the number 1706, go get the install files and try again.

Thank you Bing Translator!!

Best of Luck.

So Long Public Sites

Office 365 Public Sites are one of those features that I have mixed feelings about. It was an awesome selling point with clients to be able to say, “Buy Office 365 and you get your public site for free.” At ITG we have several clients where we have set up their public site, and for the most part it works, as long as you are not expecting full SharePoint functionality. At ITG we have had several discussions about what the best approach would be for our customers that need a full ECM solution for their public site, since the Office 365 public site didn’t give them that. Well, we don’t have to have that discussion anymore, thank you Microsoft.

Continue reading “So Long Public Sites”

What is the WSS_Logging Database?


What is the WSS_Logging database?

I was recently at the SharePoint Saturday in Bend, and one of the people who attended my session asked a question afterwards that I answered, but I don’t think I did the response enough justice. The question I was asked was “What is the WSS_Logging database?” After talking with him the underlying problem presented itself: he was having problems because his WSS_Logging database was getting rather large. So I explained to him what I thought the database was used for.

In general terms it is used for gathering ULS data for the farm. If you think about it, that by itself is pretty cool: if you have a 10-server farm, you have one location to check for any errors. But this database does so much more. Not only does it aggregate the ULS data, it also contains the server event log information. The health analyzer tool in Central Admin also uses this database to warn you of problems. In SharePoint 2010 the default name for this database is WSS_Logging; in SharePoint 2013 the default is SharePoint_Logging. Other than that, the database and the information in it are pretty much identical. In SharePoint 2013, however, a lot of the usage reports are now part of the search databases.


How does this database get its data?

This database gets its data from SharePoint timer jobs that run on all the servers in the farm. The data for each type of item is “partitioned” into tables for the day in question. These partition tables are generated based on the log providers that have been set up. You can control what data gets into this database, and on what schedule it gets added, from Central Admin -> Monitoring -> Configure Usage and Health Data Collection. From here collection can be turned on and off, and you can control which items are pulled into the tables.
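The same collection settings can also be scripted with the built-in usage cmdlets. A short sketch; the event-type name is just an example:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Turn usage data collection on for the farm.
Set-SPUsageService -LoggingEnabled 1

# Disable collection for a single event type, e.g. Page Requests.
Set-SPUsageDefinition -Identity "Page Requests" -Enable:$false
```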


As you can see, there are plenty of options you can select, and they are all selected by default. Further down the screen there is an option that will allow you to configure the schedule of the data collection; the link is called “Log Collection Schedule”. If you click this link you will be taken to the two jobs that are used for logging. They are:

  • Microsoft SharePoint Foundation Usage Data Import – imports the data into the logging database; defaults to running every 30 minutes.
  • Microsoft SharePoint Foundation Usage Data Processing – handles the processing to move the data into the correct buckets; runs daily.

For each of the events that you select to log you will get tables that correspond. Included below is a snapshot of the tables created for ULS Logging.


These tables continue through 31, which means you could have data for the last 31 days. The processing job runs daily and makes sure the tracked events are in the correct table for the day.


What can I do with this Database?

Microsoft’s TechNet article on this database has an interesting note: “the logging database is the only SharePoint 2013 database for which you can customize reports by directly changing the database.” Most of the time we are told never to touch the SharePoint databases, but this is one exception to the rule. We are still limited in what we can do with it, but at least we won’t get our hands slapped if we look at it. To change the reports, we can create SQL views if we want a different view of the report data that is already logged, or we can create a custom log provider that will allow us to add new data.
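As a hedged sketch of the SQL-view approach, the snippet below creates a view over one of the logging database’s built-in views from PowerShell. The server name, database name, view name, and column names are placeholders that will vary by environment, so check your own schema first:

```powershell
# Requires the SQL Server PowerShell module for Invoke-Sqlcmd.
Import-Module SqlServer

# Placeholder view and column names - inspect your logging database
# to confirm what is actually available.
$viewSql = @"
CREATE VIEW dbo.vw_FailedTimerJobs AS
SELECT JobTitle, ServerName, Status, StartTime
FROM dbo.TimerJobUsage
WHERE Status <> 0
"@

Invoke-Sqlcmd -ServerInstance "SQLSERVER\SHAREPOINT" `
    -Database "WSS_Logging" -Query $viewSql
```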


Now What?

Now for the million-dollar question: how do we control how big this database has become? The simplest option would be to limit the event types that are tracked. This can be done in two ways: by removing the event type altogether, or by going into the Configure Diagnostic Logging screen and changing the logging level. ULS logging can take up a lot of space on the file system, and if you multiply this space by the number of servers in the farm you can see how this can get out of control really quickly. Limiting what is logged will not immediately shrink your database, since old data is only removed automatically once it is past the retention period.
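Changing the logging level can also be done from PowerShell with Set-SPLogLevel. A sketch that raises the threshold for all categories at once:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Only write Monitorable (and more severe) trace events to the ULS logs,
# which also reduces what the import job pulls into the logging database.
Set-SPLogLevel -TraceSeverity Monitorable
```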

Another option is to look at which tables in the database are taking up the space, and then adjust how long the data is retained. The tables allow us to retain up to 31 days’ worth of data, but the default is 14. To check what your retention level is, run the following PowerShell cmdlet: Get-SPUsageDefinition.

If you run this cmdlet on a default install that has logging enabled, it will give you the following output.


From PowerShell you can change the retention period for each individual event type by using the Set-SPUsageDefinition cmdlet. This cmdlet lets us set the retention days and whether an event type is enabled or not. So if we wanted to set the retention period for Timer Jobs to one week instead of two, we could run Set-SPUsageDefinition -Identity “Timer Jobs” -DaysRetained 7. After this is done, go to both of the timer jobs mentioned above and run them. This will flush the data that is older than the retention period out of the database. So in this scenario any timer job data older than 7 days will be removed.
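The steps above can be put together in one short sketch. The wildcard assumes the two usage jobs are the only farm timer jobs with “usage” in their name, so verify the match in your own farm before running it:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Keep only one week of Timer Jobs usage data.
Set-SPUsageDefinition -Identity "Timer Jobs" -DaysRetained 7

# Run the import and processing jobs so data past the retention
# period is flushed from the logging database.
Get-SPTimerJob | Where-Object { $_.Name -like "*usage*" } | Start-SPTimerJob
```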