Get count of objects in a directory with PowerShell

A quick post!

Today I started migrating content from an on-premises source to SharePoint Online using ShareGate’s Migration tool (AWESOME TOOL by the way!). I’m about two hours into the migration and I was curious how much content I had left to go. I needed a count of all the objects in the source directory. PowerShell to the rescue:

# count everything (files and folders) under the path
$path = "[paste directory path here]"
$files = gci $path -Recurse | Measure-Object
$files.Count

You’ll get a count of all the objects in the directory. This number will help me gauge how long the migration will take. Happy migration!
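Bonus: if you also want a feel for how much data is left (not just how many objects), Measure-Object can sum the file sizes too. A quick variation on the same one-liner – note it filters to files only, since folders don’t have a Length property:

# count the files and total their size in GB
$files = gci $path -Recurse | Where-Object { -not $_.PSIsContainer } | Measure-Object -Property Length -Sum
$files.Count
[math]::Round($files.Sum / 1GB, 2)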

Configure PowerPhone with Cisco phones

We’ve started rolling out PowerObjects’ PowerPhone. Users love the functionality and it’s led to more CRM use overall.

We researched a few different CTI solutions. Getting PowerPhone set up in CRM is a breeze, but configuring things on user computers is harder than I originally thought. It took us about two months and too many hours to figure out, so I’m sharing the necessary steps so you don’t have to feel the same pain.


There are assumptions we have to make before we proceed:

Assumptions

  • You are on Cisco’s UCM (Unified Communications Manager) platform 9.0 or above.
    • UCM 9 supports Windows 7
    • UCM 10 and 10.5 support Windows 8.1
  • You have already downloaded the TAPI install files from UCM – (Note: there is no other way I know of to get the install files)
  • You have already uploaded the PowerPhone solution in your CRM instance and enabled it – found HERE.

Steps for installing the PowerPhone agent on user computers

  1. Double-click the TAPI installer (be sure to choose the correct version – x86 vs. x64). Note: just "Next" the whole way through
  2. If not installed, install .NET Framework 4.5 (there's a quick registry check for this in the sketch after these steps)
  3. A restart may be required at this point
  4. Download the PowerPhone Agent from HERE
  5. Open the zip file and copy the entire “PowerPhone_1_3_2_5_agent” directory to the user’s C: drive
  6. Once copied, open the directory and pin the PowerPhone.exe file to the task bar. This will help the user when logging in to the phone.
  7. Log in to Unified Communications Manager
  8. Go to User Management > End User
  9. Search for the User. Once found open the end user configuration by clicking on their name
  10. Scroll to the bottom and click "Add to Access Control Group"
  11. A pop-up will open; search for "Begins with 'Standard CTI'"
  12. Assign the user the "Standard CTI Enabled" role
  13. Hit Save on the End User Configuration record
  14. Go to Start > Launch TSP
  15. Right-click the TAPI driver in the notification area and choose "Cisco TAPI configuration"
  16. Once open, make sure the instance is selected and choose "Configure"
  17. On the User tab, have the user enter their Cisco username and password – if you have LDAP, you would enter your AD credentials here
  18. On the CTI Manager tab, enter the IP address(es) of your Cisco servers
  19. Make sure the user has PowerPhone rights in CRM before proceeding
  20. Open PowerPhone and configure the connection
  21. Select the phone line and add the connection info for your CRM instance. If the phone line doesn't appear in the dropdown, then something hasn't been configured properly with the TAPI driver.
  22. Once connected, click on Settings and set the outgoing number (9 in our case)
  23. Last step (at least for us): we had to make an outgoing call from PowerPhone so the TAPI driver makes the proper connection to the phone system. YMMV on this.
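A couple of the steps above are easy to script if you're touching a lot of machines – checking for .NET 4.5 (step 2) and copying the agent folder (step 5). A rough PowerShell sketch; the source path is a placeholder, and the registry check is the standard documented way to detect .NET 4.5 or later:

# .NET 4.5+ sets a Release value of 378389 or higher under this key
$release = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -ErrorAction SilentlyContinue).Release
if ($release -ge 378389) { ".NET 4.5 or later is installed" } else { ".NET 4.5 needs to be installed" }

# copy the extracted agent folder to the user's C: drive (adjust the source path)
Copy-Item 'C:\Temp\PowerPhone_1_3_2_5_agent' 'C:\PowerPhone_1_3_2_5_agent' -Recurse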

PowerShell hack with IE

I wrote this because I wondered if it could be done. I figured a PowerShell hack with IE would be pretty difficult to pull off, but as it turned out it only took a little bingling, and this is what I came up with:

# open the page (IE is the default browser here)
start 'http://www.inhifistereo.com'

# give the page a few seconds to load
Start-Sleep -Seconds 5

# close every open IE window
Get-Process iexplore | ForEach-Object { $_.CloseMainWindow() }

exit

The script opens the browser, waits five seconds, then closes the window. Pretty simple, I know, but I'll be expanding on this script more in the future. Some possibilities/uses could be:

  • Check a web page for content, then send an email depending on what's found (rough sketch below)
  • Keep a VPN connection open
  • Check if a website is up
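Here's a rough sketch of that first idea using Invoke-WebRequest and Send-MailMessage (PowerShell 3.0+; the addresses, SMTP server, and search text are all placeholders):

# grab the page, look for some expected text, and send a mail if it's missing
$response = Invoke-WebRequest -Uri 'http://www.inhifistereo.com' -UseBasicParsing

if ($response.Content -notmatch 'text you expect to find') {
    Send-MailMessage -To 'you@example.com' -From 'alerts@example.com' `
        -Subject 'Site check failed' -Body 'Expected content not found.' `
        -SmtpServer 'smtp.example.com'
}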

I went even further and added it to Task Scheduler to run every 30 minutes. There are plenty of blog posts out there outlining how to create a task in Task Scheduler, but the small hiccup I ran into was how to run PowerShell from the Scheduler.

When you get to the Actions tab add the path to Powershell:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe

The “Add arguments” section is where you add the command and path to your .ps1 file:

-Command "& [path to your file without brackets]"
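If you'd rather create the task from the command line too, schtasks.exe can do it in one shot. A rough sketch – the task name and script path (C:\Scripts\iehack.ps1) are placeholders, using -File with a path that has no spaces keeps the quoting simple, and you can add /RU and /RP if it needs to run under a specific account:

schtasks /Create /TN "IE hack" /SC MINUTE /MO 30 /TR "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File C:\Scripts\iehack.ps1"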

The possibilities are pretty endless here. I don’t know if anyone else will find this useful but I know I will use it.

Add a clock to your web page

Had an interesting request to add a clock to a web page yesterday. It turned out to be a bit more difficult than we thought. We used moment.js for this project. It's a VERY cool JavaScript library that helps you work with dates and times. If you've never done work with JavaScript and time, consider yourself lucky – it's downright hard. Moment.js takes the guesswork out, although there's a little bit of a learning curve. Let's get started.

First, add a div to your page and give it an id – the script below expects <div id="clock"></div>.

Now we add our script to the footer of the HTML.

<!-- call the jQuery and moment.js libraries -->
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
<script src="http://cdnjs.cloudflare.com/ajax/libs/moment.js/2.5.1/moment.min.js"></script>

<script type="text/javascript">
    // function to create clock functionality. updates every second.
    function timedUpdate() {
        showClock();
        setTimeout(timedUpdate, 1000);
    }

    // moment.js function. get current time > pass to moment function > create html
    function showClock() {
        var now = new Date().getTime();
        var test = moment(now).format('MMMM Do YYYY, h:mm:ss a');

        // write the formatted time into the clock div
        $("#clock").html(test);
    }

    // call update
    timedUpdate();
</script>

And then you get a nice clock to work with.
http://www.inhifistereo.com/wp-content/uploads/2014/02/Clock.html

PowerPivot using large SSRS ATOM feed fix

I’ve talked on, blogged about, and tweeted about PowerPivot for a while now. By itself, PowerPivot is pretty cool, but add it on to SharePoint and you have the Voltron of BI solutions.


I can pull from Oracle, Excel, a CSV, and DB2 all in the same file. CRAZY! What's even cooler is that I can pull an SSRS ATOM feed into PowerPivot . . . sometimes.

By default – if you're using SQL 2012 Integrated mode – the largest SSRS ATOM feed you can pull is 110 MB. We have some pretty ridiculously large reports at Trek, and a user was attempting to pull one of these monsters into PowerPivot and it was choking. To make matters worse, PowerPivot was just throwing a 500 error (Unknown error). Not really helpful…

I opened a MS Case and began troubleshooting. We went back and forth and back and forth (6 weeks!), but we finally found the solution. So to get PowerPivot using large SSRS ATOM feeds do the following:

  1. Open the file client.config on your front-end servers. The file is located here: C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\WebClients\Reporting\client.config
  2. Search for "httpStreaming" and "httpsStreaming"
  3. Within both of these bindings, change the following values – this will increase the data size limit from 110 MB to 1.1 GB (or script the edit – see the sketch after these steps):
    1. maxReceivedMessageSize from "115343360" to "1153433600"
    2. maxStringContentLength from "115343360" to "1153433600"
    3. maxArrayLength from "115343360" to "1153433600"
  4. Save the file
  5. Do an IISRESET across all SharePoint servers.
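If you have more than a couple of front-end servers, the edit itself is easy to script. A rough sketch – it backs up the file, bumps all three quota values in one pass (they all use the same number), and resets IIS; run it once per front end, and only once, since the new value contains the old one:

# back up client.config, bump the quota values, then reset IIS
$path = "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\WebClients\Reporting\client.config"
Copy-Item $path "$path.bak"

# covers maxReceivedMessageSize, maxStringContentLength, and maxArrayLength in the httpStreaming/httpsStreaming bindings
(Get-Content $path) -replace '115343360', '1153433600' | Set-Content $path

iisreset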

Happy PowerPivot’ing.

Share to Yammer button

Had an interesting request a few weeks ago. HR wanted to be able to share pages out to Yammer. I instantly ruled out a workflow because I couldn't post as the user. Then I ruled out a console app for two different reasons: the logic could get complicated and it didn't allow for much flexibility. I went over to the Yammer Customer Network and started poking around. I ran across a few threads that talked about using the bookmarklet, so I went and took a look at it: https://www.yammer.com/company/bookmarklet

The app – as designed – is aimed at users who want to share pages via their browser, not necessarily on a web page itself. I opened the developer toolbar and hashed through the code. I managed to find the JavaScript that snags the URL and opens the bookmarklet window. As Steve would say, “Talk is cheap, show me the code”:

<h2>Share this page . . . </h2>

The code gives you a Share to Yammer button on the page.


YAHTZEE!


Once the user clicks the icon a new window will open and – as long as they’re logged in to Yammer – they’ll be able to create a new Yam with the URL of the page they want to share in the update’s body.

Bookmarklet Window example

Give it a shot and let me know what you think.

Azure Storage test drive

For the last year and a half I've been taking one class a semester at MATC in Madison. Having been trained as a Technical Writer, I've basically learned all this sysadmin stuff "on the job." I figured it would be a good idea to fill in the blanks for the stuff I hadn't learned yet. The classes require an external hard drive to house and manage the VMs you use during labs and tests. Being 30 and having a full-time job allows me to buy really cool, really fast hardware to satisfy this class requirement. I opted for a 128GB Vertex 4. This thing SCREAMS. I get labs done in record time.

So how am I supposed to get my homework done if a spaghetti and meatball tornado comes through and wipes out the lower half of Wisconsin, taking my external hard drive with it?

TO THE CLOUD!

I've been using Azure at work for a variety of things, so I figured I'd give this a try. I have three VMs, and with them all zipped up (individually) I have about 16 GB total to upload to Azure.

There are 3 Azure storage basics you need to know about: storage accounts, containers, and blobs. A storage account is the first thing you need in order to get started.

The storage account sets up the subdomain you’ll use to be able to communicate with your storage objects: yourstorage.*.core.windows.net. You also set the affinity group (location) where your content will be stored.

Once your storage account is up, you'll need a container. Think of a container as a folder – only it's not a folder – it's a container. It holds your blobs – binary large objects (i.e., your files). More on that in a bit.

Click Storage in the left-hand navigation

Click on the storage account name (my account is called "inhifistereo," you can call yours whatever you like)

Click containers at the top of the page > then click New at the bottom of the page


Now give the container a name and choose Public or Private

Private is just that: private. Meaning you have to be logged in (or have a shared access key, but that's fodder for another blog post) to access your stuff. A public container is cool because you can access it from anywhere as long as you have the URL. Click the checkmark and we're good to go.

So a container is a container – like a folder, only it’s a container. And a blob is a file. The part that took me a second to understand is this storage isn’t like a fileshare up in the cloud. It’s the basic building blocks of storage in the cloud. A container dictates the access method, and a blob is the big ‘ole file that sits within the container.

Now to get content up to Azure. You could write a console app, use PowerShell, or use a third-party tool. For this exercise, I opted for a third-party tool; there are several other tools out there too.
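If you'd rather script the upload, the Azure PowerShell module of the day can do it in a few lines. A rough sketch – the storage key, container name, and file path are placeholders (grab the key from the portal or Get-AzureStorageKey):

# point PowerShell at the storage account
$ctx = New-AzureStorageContext -StorageAccountName 'inhifistereo' -StorageAccountKey '<storage key>'

# create a private container for the class VMs
New-AzureStorageContainer -Name 'vmbackups' -Permission Off -Context $ctx

# push one of the zipped VMs up as a blob
Set-AzureStorageBlobContent -File 'E:\VMs\DC01.zip' -Container 'vmbackups' -Context $ctx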

The files took me basically all day to upload. There were several reasons for this. For one, I have the most basic Broadband package Charter offers, but I’m not doing this for a living or every day so the time is no big deal. I’m charged by the GB not the minute, so if it took several days no biggie. But I’m not getting any younger…

Following this blog post, I did learn that the tools above do not upload in parallel, which is why it took so long.

“But David, you have a Skydrive and Dropbox account along with a hosting account. Why use Azure Storage?” Why not!? The real beauty of Azure storage is I only pay for what I use, and I pay pennies at that. Skydrive and Dropbox require a yearly commitment, and college classes only last 18 weeks. So when the class is over I can blow the container away and I don't get charged anymore. I don't plan on ever using these backups, so they're cheap insurance. Now having said that, Azure storage (and Amazon, and Google, etc.) isn't really set up for consumer usage. But I'm not your typical consumer.

I’ll give PowerShell a shot next time and probably try Amazon as well to see if there are any performance differences. If I’m feeling really ambitious I may try doing a console app.

Price-wise, I’ve been charged a total of 15 US cents so far. I may have to go raid the couch cushions…

CRM + SharePoint * Excel Services = Epic Awesomeness

What’s the best way to get someone to eat their vegetables? Force feed them 😉

All kidding aside, CRM can be a pretty powerful tool and when people don’t want to use it we have to find creative ways to get them to use CRM. In addition to CRM usage, we have people who are absolutely married to Excel. And these aren’t your typical Excel files. These are files that legends are made of. Crazy formulas, vlookups galore; you name it, we use it. To make matters worse, they e-mail these Excel reports around all day, every day as attachments. So let’s kill two birds with one stone. We stop a big group of people from e-mailing Excel attachments and we get them to use CRM. Win-win for everyone (or at least that’s the hope).

  1. Save the Excel file in an easy-to-find, easy-to-access place in SharePoint – doing this in SharePoint gives us all the doc mgmt benefits that we've come to know and love
  2. Configure your Excel REST API URL – I’ve made it pretty clear that I love SharePoint’s REST APIs in the past and the Excel API is no exception. You can read more about it here: http://msdn.microsoft.com/en-us/library/ee556413(v=office.14).aspx
  3. Create a new Web Resource in CRM – we’re going to iframe our Excel REST API URL call
  4. Choose Web Page (HTML) as the Type and then click “Text Editor”
  5. Click the Source tab
  6. Paste in your iframe code between the <body> tags. Your code should look like this:
    <iframe src="https://dude.com/sites/site1/_vti_bin/ExcelRest.aspx/Documents/Document.xlsx/Model/Ranges('Scorecard')?$format=html" frameborder="0" height="[height]" width="[width]"></iframe>
    • Note the frameborder, height, and width attributes. These are needed to eliminate the nasty border and to make scrolling work correctly. iframes aren't perfect and getting them to work feels "hacky," but the user won't know the difference and it should perform relatively seamlessly in all browsers.
  7. Click Publish
  8. Now, navigate to the desired dashboard and add your new Web Resource, click Save, and Publish.

Users should now see the Excel spreadsheet in their dashboard:

If users do not have access to the spreadsheet, they should encounter an "Error: Access Denied" prompt or a blank screen, depending on the browser they use.

Extra Credit

In our case, the Excel spreadsheet scrolled FOREVER. I wanted to give users a pleasurable experience but I also didn’t necessarily want them resorting to Excel on the client right away. I added a “Click to View in a separate Window” link in the iframe Web resource. Here’s what my code looked like:

<p><a href="https://dude.crm.dynamics.com/WebResources/new_iframe2">Click to View in separate Window</a></p>
<iframe src="https://dude.com/sites/site1/_vti_bin/ExcelRest.aspx/Documents/Document.xlsx/Model/Ranges('Scorecard')?$format=html" frameborder="0" height="[height]" width="[width]"></iframe>

All HTML Web Resources are web pages, so I linked directly to the web page. But notice I linked to new_iframe2? I didn’t want users seeing “Click to View” on every page so I made an identical web resource, except I removed the hyperlink from the top, making for a seamless experience for the user. There’s all sorts of other things I could have done on the new_iframe2 page. I could have linked to the Excel Web Access or even directly to Excel itself, but we’ll leave it like that for now.

Ultimately, I’ve gotten the report builders to stop e-mailing this specific report as an attachment, and now the audience of the spreadsheet has to go to CRM to view it rather than getting it e-mailed to them. Awesome.

Obligatory SharePoint 2013 Search PowerShell post

Maybe you're still kicking the tires on SharePoint 2013, installing it for the first time. Maybe you're in the throes of planning your migration. Maybe you're a consultant who's been stuck on a SharePoint 2010 project for the last 18 months. Whatever the case, we all have to face the music sooner or later and upgrade to SharePoint 2013. When you do, you'll have to set up a Search Service. It's not as bad as you'd think. And if you have at least three servers in your farm (one app server and two WFEs), then this script will work for you. Without further delay:

#Config Section
$APP1 = "App1"
$WFE1 = "WFE1"
$WFE2 = "WFE2"
$SearchAppPoolName = "SearchServiceAppPool"
$SearchAppPoolAccountName = "domain\SearchSvc"
$SearchServiceName = "SharePoint Search Service"
$SearchServiceProxyName = "SharePoint Search Service Proxy"
$DatabaseServer = "DBserver"
$DatabaseName = "SP_Search_AdminDB"

#Create a Search Service Application Pool
$spAppPool = New-SPServiceApplicationPool -Name $SearchAppPoolName -Account $SearchAppPoolAccountName -Verbose 

#Start the Search Service Instance on the Application Server
Start-SPEnterpriseSearchServiceInstance $App1 -ErrorAction SilentlyContinue
Start-SPEnterpriseSearchQueryAndSiteSettingsServiceInstance $App1 -ErrorAction SilentlyContinue

#Start Search Service Instance on WFEs
Start-SPEnterpriseSearchServiceInstance $WFE1 -ErrorAction SilentlyContinue
Start-SPEnterpriseSearchServiceInstance $WFE2 -ErrorAction SilentlyContinue

#Create Search Service Application
$ServiceApplication = New-SPEnterpriseSearchServiceApplication -Partitioned -Name $SearchServiceName -ApplicationPool $spAppPool.Name -DatabaseServer $DatabaseServer -DatabaseName $DatabaseName 

#Create Search Service Proxy
New-SPEnterpriseSearchServiceApplicationProxy -Partitioned -Name $SearchServiceProxyName -SearchApplication $ServiceApplication
$clone = $ServiceApplication.ActiveTopology.Clone()

#Set variables for component creation
$App1SSI = Get-SPEnterpriseSearchServiceInstance -Identity $App1
$WFE1SSI = Get-SPEnterpriseSearchServiceInstance -Identity $WFE1
$WFE2SSI = Get-SPEnterpriseSearchServiceInstance -Identity $WFE2

#Create Admin component
New-SPEnterpriseSearchAdminComponent -SearchTopology $clone -SearchServiceInstance $App1SSI

#Create Processing component
New-SPEnterpriseSearchContentProcessingComponent -SearchTopology $clone -SearchServiceInstance $App1SSI

#Create Analytics Processing component
New-SPEnterpriseSearchAnalyticsProcessingComponent -SearchTopology $clone -SearchServiceInstance $App1SSI

#Create Crawl component
New-SPEnterpriseSearchCrawlComponent -SearchTopology $clone -SearchServiceInstance $App1SSI

#Create query processing component
New-SPEnterpriseSearchQueryProcessingComponent -SearchTopology $clone -SearchServiceInstance $App1SSI

#Set the primary and replica index location; ensure these drives and folders exist on application servers
$PrimaryIndexLocation = "C:\SPSearch"
$ReplicaIndexLocation = "C:\SPSearchReplica"

#We need two index partitions and replicas for each partition. Follow the sequence.
New-SPEnterpriseSearchIndexComponent -SearchTopology $clone -SearchServiceInstance $WFE1SSI -RootDirectory $PrimaryIndexLocation -IndexPartition 0
New-SPEnterpriseSearchIndexComponent -SearchTopology $clone -SearchServiceInstance $WFE2SSI -RootDirectory $ReplicaIndexLocation -IndexPartition 0
New-SPEnterpriseSearchIndexComponent -SearchTopology $clone -SearchServiceInstance $WFE1SSI -RootDirectory $PrimaryIndexLocation -IndexPartition 1
New-SPEnterpriseSearchIndexComponent -SearchTopology $clone -SearchServiceInstance $WFE2SSI -RootDirectory $ReplicaIndexLocation -IndexPartition 1

$clone.Activate()

#Verify Search Topology
$ssa = Get-SPEnterpriseSearchServiceApplication
Get-SPEnterpriseSearchTopology -Active -SearchApplication $ssa

This script is actually pretty basic. 90% of the services end up on your App box, while the Index partitions live on your WFEs. You could provision the service on one box or any combination you see fit. That’s the beauty of this model. For me, I like having the Index partition closest to where people will be searching (i.e. the WFEs).

The script should take anywhere from 10-30 minutes to run, maybe longer depending on your hardware. Once done, navigate to your Search Service in Central Admin and check the Search Application Topology section – you should see the components spread across your servers as defined above.
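You can also verify component health from PowerShell rather than waiting on Central Admin – a quick check using the service application from the script above:

# list every search component and its current state
$ssa = Get-SPEnterpriseSearchServiceApplication
Get-SPEnterpriseSearchStatus -SearchApplication $ssa -Text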

The most important thing to remember when using this script is to create the C:\SPSearch and C:\SPSearchReplica directories on your WFEs PRIOR to running it. The script will fail if you don't do this, and it's a pain to clean up after, so create the directories first. I'll probably write in a check to see if the directories exist – and if they don't, go ahead and create them – next time I run this script when setting up an environment.
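Something like this at the top of the script would do it – a quick sketch, assuming PowerShell remoting is enabled on the WFEs:

# make sure the index directories exist on both WFEs before building the topology
Invoke-Command -ComputerName $WFE1, $WFE2 -ScriptBlock {
    foreach ($dir in 'C:\SPSearch', 'C:\SPSearchReplica') {
        if (-not (Test-Path $dir)) { New-Item -ItemType Directory -Path $dir | Out-Null }
    }
}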

Delete user in asp.net membership provider

I have a SharePoint farm where I use the aspnet membership provider. Now and again I need to remove users due to separations, change of job, etc. so they can’t access SharePoint.

Like all of you I have to research this thing anew every time I have to do this. But no more! My future self will thank me.

Using the aspnet_Users_DeleteUser stored proc we can remove users via SSMS.

USE [database name]

EXEC [dbo].[aspnet_Users_DeleteUser]
@ApplicationName = '[Application Name]',
@UserName = '[Username]',
@TablesToDeleteFrom = 15,
@NumTablesDeletedFrom = 0

GO

@TablesToDeleteFrom can be a bit confusing when you open the stored proc up. It's actually a bit mask. I typically have to remove users entirely, so I use 15, but this blog post details some additional options for that parameter: http://vsproblemssolved.blogspot.com/2007/01/using-sqlmembershipprovider.html

@NumTablesDeletedFrom is an output parameter: the stored proc writes back the number of tables it actually deleted the user from, so whatever value you pass in is just a placeholder unless you capture it with the OUTPUT keyword. You can use that parameter to inspect the result, but I'm not that ambitious.
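If you'd rather run the delete from PowerShell (say, as part of a bigger offboarding script), Invoke-Sqlcmd can wrap the same call and hand back that output parameter. A sketch, assuming the SQL Server PowerShell module is loaded and with the server, database, application, and user names as placeholders:

# run the delete and capture how many tables the user was removed from
$query = @"
DECLARE @n int;
EXEC [dbo].[aspnet_Users_DeleteUser]
    @ApplicationName = '[Application Name]',
    @UserName = '[Username]',
    @TablesToDeleteFrom = 15,
    @NumTablesDeletedFrom = @n OUTPUT;
SELECT @n AS TablesDeletedFrom;
"@

Invoke-Sqlcmd -ServerInstance '[SQL server]' -Database '[database name]' -Query $query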