House of Windows

So we've had this ILM instance over in our eval/staging forest for many years. It's a VM, and it has a bit of a sordid history having become a VM via a somewhat checkered path. This ILM instance is where I test out changes before deploying them. But for several reasons I won't go into here, this ILM instance has never automatically run the usual cycle as the production ILM instance does.

But I'd like it to now, so I went through all the steps to do that. In a nutshell, this means having a script which calls the MicrosoftIdentityIntegrationServer WMI provider and executes the right management agent run profiles, triggered by a recurring scheduled task. This is the standard approach everyone uses, because ILM has never had its own automation. You can see an example of the script code used to do this here: http://msdn.microsoft.com/en-us/library/windows/desktop/ms697765(v=vs.85).aspx
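For the curious, here's a minimal PowerShell sketch of the same idea (the MSDN sample is VBScript); the management agent name and run profile name below are made up, and a real run cycle would step through several of these in order:

$ma = Get-WmiObject -Namespace "root\MicrosoftIdentityIntegrationServer" `
    -Class "MIIS_ManagementAgent" -Filter "Name='HR MA'"
# Execute() kicks off the named run profile and reports the outcome as a string
$result = $ma.Execute("Full Import")
if ($result.ReturnValue -ne "success") {
    Write-Warning ("Run profile ended with: " + $result.ReturnValue)
}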

I used the same script I was successfully executing with my admin account, but set it up to run with a service account. The service account was, of course, in all the right ILM groups.

But the scheduled task wouldn't run properly.

So then I tried via a runas command prompt as the service account. And thus began a long saga of learning obscure things.
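For reference, that just meant opening a shell as the service account and re-running the script from it:

runas /user:DOGFOOD\a_ilm-acct powershell.exe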

The first error was clearly a DCOM permission issue. And so I manually fixed the DCOM permissions, although that would have been fixed by an ILM reinstall.

Then I ran into a more obscure error:
Event Type: Error
Event Source: MIIServer
Event Category: Security 
Event ID: 6600
Date: 8/30/2012
Time: 3:15:00 PM
User: N/A
Computer: KIBBLESNBITS
Description:
A user was denied access for an operation.
 
 Additional Details
 User: "DOGFOOD\a_ilm-acct" Operation attempted:"CServer::GetEnumInterface

The script itself returned error code 0x80041001, which translates to one of two possible causes: REC_E_NOCALLBACK or WBEM_E_FAILED. If I had known more about WMI at this point, I might have saved myself some time.

Reinstalling ILM when you run into an odd problem such as this is a common recommendation, and that's what I tried first, after verifying the account was in all the right groups. But reinstalling ILM didn't help.

My next theory was that perhaps SQL permissions weren't correct. This installation of ILM was using SQL Express, so I had to download the free SQL Server Management Studio Express to check out the permissions. SQL permissions were fine.

Then I started looking more closely at exactly which line of the script was raising an exception.

This led to realizing that something about WMI wasn't happy. I'm not really a WMI expert.

I found a thread online with other folks who had run into a similar problem: http://social.technet.microsoft.com/Forums/en-US/identitylifecyclemanager/thread/2a4c8f42-4123-4297-aa35-29a96956946e/.

So I tried the WMI repository fix (basically deleting it). That didn't fix the problem and actually seemed to make it worse, as now the script couldn't even find the MicrosoftIdentityIntegrationServer WMI provider/namespace.

But then I realized that I might need to re-register that provider/namespace afresh, and I tried Yorick's fix in that thread, i.e. running "mofcomp mmswmi.mof".
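For reference, assuming the default ILM install path (adjust if yours differs), that fix looks like:

mofcomp "C:\Program Files\Microsoft Identity Integration Server\Bin\mmswmi.mof"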

And then the script worked for my service account.

I suspect that if I had looked at the WMI permissions instead of wiping the WMI repository, I would have found that the service account didn't have permissions, but it's also possible that there was some kind of WMI corruption present.
UW Infrastructure; Engineering | 8/31/2012 11:38 AM | Brian Arkills
  

Over the past month, Microsoft has been more forthcoming about some of the massive investment they've been making in their cloud-based Active Directory. I heard a bit about this a month ago at The Experts Conference from Microsoft Identity GM Uday Hegde, most notably with a focus on the market trends Microsoft is reacting to and anticipating. Here's a plethora of links to read more about what Microsoft is up to:

 

John Shewchuk's blog post on Reimagining Active Directory for the Social Enterprise (part 1):

http://blogs.msdn.com/b/windowsazure/archive/2012/05/23/reimagining-active-directory-for-the-social-enterprise-part-1.aspx

 

Kim Cameron's blog post on IDMaaS (Identity Management as a service):

http://www.identityblog.com/?p=1205

 

Mary Jo Foley and John Fontana's Take:

http://www.zdnet.com/blog/microsoft/microsoft-finally-goes-public-with-windows-azure-active-directory-details/12795

http://www.zdnet.com/blog/identity/microsoft-unveils-ad-azure-strategy-id-management-reset/507

5/31/2012 8:27 AM | Brian Arkills
  

In the past I've provided a custom OUTLOOK.HOL file to add the UW Holidays to Outlook.

You can read https://sharepoint.washington.edu/windows/Lists/Posts/Post.aspx?ID=98 for the first post where I mentioned this mechanism.
 
I've just updated the Outlook.hol file noted in that post to include the 2012 & 2013 UW Holidays.

Note that this newly updated OUTLOOK.HOL file is based on the stock Outlook 2010 file with just the UW 2012 & 2013 dates added. To use it: close Outlook, replace the stock file at C:\Program Files (x86)\Microsoft Office\Office14\1033 with the new copy (to be safe, make a copy of your stock file first), then go to File, Options, Calendar, Add Holidays..., uncheck United States, and check UW Holidays 2012 and 2013.
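If you'd rather script the file swap than do it by hand, here's a rough sketch (run it elevated; it assumes the new file landed in your Downloads folder and Outlook 2010 is installed in the default location):

$holDir = "C:\Program Files (x86)\Microsoft Office\Office14\1033"
# Keep a backup of the stock file, then drop in the updated one
Copy-Item "$holDir\OUTLOOK.HOL" "$holDir\OUTLOOK.HOL.bak"
Copy-Item "$env:USERPROFILE\Downloads\OUTLOOK.HOL" "$holDir\OUTLOOK.HOL" -Force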

Finally, I note that Outlook 2010 has an internet calendar subscription feature, where you can subscribe to a web-based calendar. I don't know how well this feature works, or if there's an iCal formatted calendar for the UW holidays that's maintained somewhere already, but it's something worth considering as a possible better solution here.

Enjoy!

Exchange | 1/4/2012 8:58 AM | Brian Arkills
  

So I finally got around to looking more closely at the Windows Server 8 bits given out at Build. One of the first things I do when I look at a new MS server OS is look at the AD schema files. I'm sure that sounds pretty geeky and tedious to others, but I find that it gives me a really quick overview of most of the new features and some of the details associated with them, so I know what to look for (and what to hope for). Sometimes I find gems that I really like but which don't make it into slide decks or announcements because they aren't flashy enough or didn't require much MS investment--and there are plenty of examples of that in this new set of schema files.
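If you want to do this kind of comparison yourself, one quick and admittedly crude approach is to diff the adprep schema files (sch*.ldf) between the old and new install media; the paths below are illustrative:

# Old media's adprep folder copied locally; new media mounted at D:
$old = Get-ChildItem "C:\adprep-ws2008r2\sch*.ldf" | Select-Object -ExpandProperty Name
$new = Get-ChildItem "D:\support\adprep\sch*.ldf" | Select-Object -ExpandProperty Name
Compare-Object $old $new    # files only present on the new media hold the new schema changes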

 Of course, this OS is not even at beta level yet, so it's hard to know what will be cut or added, but even so, it paints an interesting picture. And I figure this info is of broad interest, so here are my notes from perusing the new schema files:

 

  • *Lots* of stuff to support CBAC, including cross-forest
  • Stuff to support integrating KMS with AD (called KDS), especially storing various keys in AD, but also storing configuration settings. Details here may have implications for those of us who run KMS (especially if that KMS serves more than one forest).
  • Stuff to represent TPM as a new object type, link to computer objects, and other new TPM functionality. Maybe a refactor of how they do AD integrated bitlocker--or maybe this is only to support the new UEFI boot which I heard they had to make some changes to support.
  • Stuff to support DNS zone signing, including storage of NSEC keys in AD, and storage of some DNS settings in AD. Haven't heard of this yet.
  • Stuff to help with DC virtualization, including an attribute to support VM snapshot resumption detection and a controlAccessRight to allow a DC to clone itself.
  • An attribute to support 'act on behalf of' access checks. Might be only for CBAC, but maybe it's to help with cross-forest Kerberos delegation. Unclear.
  • Stuff to support scan repository/secure print devices. Looks like it's trying to make it easy for a vendor to design their product to join AD and store certs there to secure their device. Hadn't heard of this yet.
  • An attribute (and backlink) to store whether a computer is a user (or group's) primary computer. Haven't heard of this yet. Nice gem. :)
  • A controlAccessRight for "validated write to MS DS additional DNS Host Name". Not sure what this is yet.
  • Stuff to support managed password data for a group managed service account, including what looks like a custom access control mechanism.
  • New attributes to store geo coordinates in, including longitude, latitude, and altitude. Haven't heard of this yet.

Enjoy!​

Engineering | 9/27/2011 1:55 PM | Brian Arkills
  

As part of the Office 365 project, UW-IT is deploying ADFS. This will enable federated logon to Office 365 services, and as time allows it can be used in other scenarios to federate other applications on the Windows platform.

I've talked and read a lot about ADFS, and listened to perhaps a dozen presentations on it, but haven't had time to look at it myself until this project came along.
 

 

My initial experiences are well summarized by a statement on a Virginia Tech blog: "Along comes ADFS 2... new, more powerful, better, more standards compliant ... documentation reads like scrawl on a napkin."

 

I ran into a lot of "gotchas", most of which are noted on various websites, but very few of which are well addressed by the Microsoft documentation or tools. I don't blog much about problems I run into, but this set of gotchas calls out for a blog post. :)
 
Gotcha #1:
Don't use the WS2008 Roles feature to install ADFS. You'd think you'd just install the ADFS role via Server Manager, right?
 
Wrong.
 
If you do this, you end up with ADFS 1.0. Nothing in Server Manager indicates the version number. Nothing tips you off in the interface until you read documentation somewhere which tells you you have to download the ADFS 2.0 installer and this is the only way to get 2.0 installed.
Gotcha #2:
If you install ADFS 1.0, you can't install 2.0 without removing 1.0 first. No upgrade. Is this Microsoft software? OK, I can deal with this.

Gotcha #3:
If you use the ADFS GUI installer, you can't use SQL Standard (or Enterprise/Datacenter). The UI at least gives you a small warning note about this. However, this part of the UI and the Microsoft documentation are also a source of the next gotcha.

 


Gotcha #4:
The MS documentation uses the term "Windows Internal Database (WID)" consistently for the "small" ADFS cluster deployment option. Nowhere does it refer to SQL Express. The ADFS installer UI never uses the WID term, and only refers to SQL Express. Until you realize that WID is a SQL Express based database, you are confused. Maybe this terminology mismatch only confused me ...

Gotcha #5:
The fact that the WID option doesn't support SAML artifact functionality isn't well documented or presented. If you choose WID you are sacrificing some functionality. That's an important point and shouldn't be hidden.

Gotcha #6:
If you don't want to use SQL Express, then you have to use a command line utility called fsconfig.exe to generate the SQL database for ADFS. The Microsoft documentation on this is ... sparse? Almost non-existent? And what there is doesn't really highlight the best way to approach this. Run:

Fsconfig.exe GenerateSQLScripts /ServiceAccount [account] /ScriptDestinationFolder [destination folder]

which will generate 2 .sql scripts which will do the work required on the desired SQL server.

Gotcha #7:
The commonly recognized best practice for the token-signing certificate is to use a self-signed certificate. This is because self-signed certs automatically renew themselves, whereas replacing a CA signed cert can result in an ADFS outage or require you to notify all trust partners. But the installation UI doesn't default to a self-signed cert for the token-signing cert. Good thing I've listened to so many ADFS presentations, and talked with folks who have been running ADFS for awhile.

Gotcha #8:
ADFS wants port 1501 for some of its communications. The TSM Client Scheduler service uses port 1501. This blocks installation of ADFS. You can get past this and still run TSM, but it requires stopping the TSM client during installation, then later changing the port that ADFS wants to use. See http://social.technet.microsoft.com/wiki/contents/articles/ad-fs-2-0-how-to-change-the-net-tcp-ports-for-services-and-administration.aspx for how to change that port.
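Incidentally, a quick generic way to check whether something on the box already owns that port before you install:

netstat -ano | findstr ":1501"
tasklist /fi "PID eq <pid from the netstat output>"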
 
More gotchas? Maybe. I've only installed one ADFS server in our dogfood environment. I'm sure I'll run into more shenanigans.
Engineering | 9/21/2011 11:56 AM | Brian Arkills
  

There wasn't really a lot of new news today--it was mostly a day for catching up with what had been unleashed the prior 2 days and drilling deeper.

Today I found someone to talk to about Microsoft's event messaging bus technology and also spent some time skimming a hands-on developer lab on the topic. Microsoft calls this Azure Service Bus, but it's also sometimes referred to as AppFabric. This technology enables 2 apps/services to have a 3rd party mediate messages between them--or what's called loosely coupled interaction. That 3rd party specializes in messaging, which improves the resiliency, and also removes issues caused by timing dependencies. UW-IT has been working with the ActiveMQ stack, which is what Amazon uses to drive their marketplace. But the idea of moving this kind of functionality to the cloud is intriguing for a variety of reasons. From what I can tell, this MS technology appears sound. It supports any authentication provider that Azure will support, all communication over the wire is encrypted via https, the technology supports both queues and topics (filtered views of a queue), and in addition to the .Net framework support, it has a RESTful API which means it's a cross-platform player. And this cross-platform support is a big deal for this technology. So technically this looks viable. Whether it makes more sense than an on-prem solution remains to be seen.
 
Windows To Go was one of the few things that was new today--nothing on it had been presented previously. With Windows 8, Windows now supports booting from a USB drive. This is a pretty significant development, but it's the details that make it significant. A Windows To Go USB drive can be used on any x64 hardware and roam between them. Windows will do hardware detection and drive installation the first time it boots any particular computer, but after that initial boot, all subsequent boots are speedy and without interruption. The USB drive is used for all the usual Windows boot drive stuff including the paging file, and for this reason Microsoft has some strong recommendations about the specs. Bitlocker is supported by this new feature, but it sounded like there were some restrictions.
 
In the short term, Microsoft sees this enabling a variety of scenarios:
  • Managed workstation builds from work running on home/personal hardware
  • "Roaming" workers that use drop-in spaces. Or re-imagined in the University setting, students that use drop-in labs

But in the longer-term, with USB3 performance, this may replace traditional hard drives. For example, you can imagine how much easier it'd be to upgrade a user's operating system by just handing them a new USB flash drive.

If you do remove the USB drive while Windows is running, the OS is immediately suspended. If you replace the drive within 60 seconds, the OS resumes where it suspended without any loss. But after 60 seconds, the OS shuts down. And depending on what was happening at the time of removal, there is the possibility of corruption on the drive.

Microsoft was unwilling or unable to discuss the licensing implications of this development today--but with our campus agreement that's not really a big deal.

Oh ... and yes, you can use this with virtuals, but only if your virtual technology supports boot from USB. HyperV now does. Virtual PC and most others don't. Which reminds me. HyperV now runs on the Windows 8 desktop--I don't think I've mentioned that before. Oops.

 

On the developer side of the house, there's been a lot of confusion around the new architecture for the various languages on Windows 8. There seems to be some clarity emerging, which this post does a good job of explaining: http://t.co/7cIWykkK.

As a follow-up on the dynamic ACL functionality, today I verified that AD doesn't support the new "additional conditions". I talked with a Principal Program Manager Lead in the Microsoft Identity & Access team and learned that Microsoft has no plans for Windows 8 to extend the dynamic ACL features beyond the file server. I explained the Education sector's needs here, and he seemed to get that we're stuck between a rock and a hard place, so hopefully something will come of that.​

 

Engineering | 9/15/2011 8:57 PM | Brian Arkills
  

Lots of technical feature goodies for developers today. Visual Studio 11 has lots of new features, including performance improvements. A few other features:

  • Metro apps have two methods for deployment, either to MS App Store (which has a transparent approval process logged in VS) or as a very simple new package type. 
  • Simulator feature in new visual studio uses RDP to local host with false resolution & dpi to allow simulation in all graphics forms.
  • Awesome new feature in visual studio: find clones. Finds code that is a copy & paste ignoring variable names--just looking at the syntax tree.
  • All the new WinRT APIs are reflected, enabling dynamic languages out of box.
  • Agile & scrum support in new tfs and tfs 2010 update.
  • TFS in cloud or on-premise
Lots of Windows Server 8 (WS8) details revealed today:
  • HyperV live migration *without* shared storage and no interruption!
  • Virtual IPs and switches in WS8 HyperV
  • WS8 is by default Server Core--no GUI. As a result, WS8 server apps can't assume a GUI is present.
  • 640 logical processors with ws8 and support for 4TB memory
  • Dynamic access control feature. Uses AD based policy + expression based ACLs on the resource server + user and/or computer claims. This layers on top of existing acls--like NTFS ACLs layer on top of Share ACLs. I haven't yet seen exactly where this can be used except for file services. Will find out more today.
  • Another feature here is an automated file classification engine which uses text matching and together with the expression based ACLs can restrict access. So for example, you can define a regex that'll find social security numbers in files, which will result in those files getting a metadata label, which then triggers the dynamic access controls.
  • Metadata labels can also be applied to files manually, either on a single basis or via folder inheritance.
  • Another related feature here is a redesigned "Access Denied" experience. Admins can enable an interactive and customized "Access Denied" message, giving the end user custom info about what to do, and even send email from the error message.
  • Windows Management Framework is now the primary method of communication with remote servers, as opposed to DCOM.
  • Workflow support in powershell. Allows easier automation of management tasks across many servers. (Rough sketch below.)
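To make that last bullet concrete, here's a rough sketch of the workflow syntax as I understand it from the preview bits (server names are made up):

workflow Get-ServerHealth {
    # Activities inside a parallel block run concurrently
    parallel {
        InlineScript { Get-Service -Name W32Time }
        InlineScript { Get-Process -Name lsass }
    }
}
# Workflows pick up a -PSComputerName common parameter for fan-out to many servers
Get-ServerHealth -PSComputerName "server1","server2"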

Across WS8, there's a theme of enabling the customer to run private clouds more easily.

They apparently gave attendees WS8 bits yesterday--I haven't had a chance to look at them yet, nor have I had a chance to see if they are downloadable.
 
Other bits from yesterday:
  • Apps in the Metro UI on Win8 don't have a way to be closed. This is by design for a couple reasons:
    • Apps go into a suspended state within 5-10 seconds after you switch away from them. Because of this all Metro apps should automatically be saving user state.
    • Running/suspended apps can have a different tile experience than non-running apps--sharing interesting info with the desktop. This makes the Metro "desktop" appear dynamic with for example facebook content or other app specific data being exposed.
  • Azure Service Bus messaging is a big thrust from MS. Allows loosely coupled interaction. UW-IT is doing some message bus work currently with ActiveMQ.
More when I have it. :)
Engineering | 9/15/2011 7:32 AM | Brian Arkills
  

Spent most of last night playing with Windows 8 on this: http://laptoport.com/2011/09/04/samsung-announces-its-11-6-inch-windows-7-tablet-the-series-7-slate-pc/.

Key things picked up from horsing around and listening to the BUILD keynotes:
 
All Win7 Apps will continue to work on Win8. Some Win7 Apps will need some changes to run in the Metro UI. For example, overlapping windows (a window on top of another window) is not allowed. And any IO/wait operation that could take more than 50ms must be done asynchronously.
 
Microsoft is pretty serious about performance on Win8. They reduced the OS memory footprint by almost half. And it takes literally seconds to boot.

 
 

I was pleasantly surprised by Metro and the touch screen interface. This is subjective, but of all the UI changes I've seen over the years, this seems like one of the best. And you can always drop back to the old UI--or just not use the Metro UI. The design principles behind Metro are sensible, focusing on performance, content, and simplified interaction. And getting almost all of the screen real-estate back from the OS is nice. And it looks like Microsoft will be applying their app design principles to their own apps--IE10 takes even less real estate than IE9, which was pretty darn slim.
 
There's a strong emphasis on Microsoft Live interconnectivity with Win8. All Metro and Metro App user settings can roam between devices, which is accomplished because all Metro apps get a small amount of per-user space in SkyDrive. So, for example, you'd never need to re-complete an Angry Bird game level because you were using a different Win8 device than you last played the game on. ;) I haven't heard anything clarifying how the Microsoft Live ties might relate to Office 365 and the MOSS ID underlying Office 365 as opposed to the consumer Microsoft Live services and Live ID--everything so far has been around the consumer Live ID. This worries me a bit but I'll probably find out more as the week progresses.
 
On the developer side of the house, Microsoft has made some significant contributions to bring the varied language environments to an equal footing. They've made a key functionality investment in app to app sharing, allowing one app to share data with another app, without any prior knowledge of each other. This is a significant development. Lots more here ...​
Engineering | 9/15/2011 7:30 AM | Brian Arkills
  

Back on 5/11/11, we made a minor change to the delegated OU permissions to ensure existing policy was enforced on object creations. This was intended to close a loophole, and we didn't anticipate any impact to customers. But we ran into two customer impacts.

First some background:

We've never granted the ability to 'change permissions' at the top level of your OU. However, by allowing folks to create objects, we give them the ability to be the object owner. In either AD or NTFS, an object owner has two implicit permissions that *aren't* explicitly in the ACL. These two permissions are:

  • Modify Owner
  • Change Permissions

In other words, the owner can make a different account the owner, and the owner can also set permissions on that object.

Since we allow delegated OU admins to create objects, they become the owner, and inherit the ability to set permissions. That is, prior to 5/11/2011 they could set permissions.

With WS2008, Microsoft provided a way to override the implicit permissions that an owner has. This can be done via the new "well-known sid" security principal called NT Authority\Owner Rights. If that principal is granted anything on an object, it overrides the implicit permissions.

So on 5/11/2011, you'll see in the documented perms that we've set an allow ACE granting 'Owner Rights' only Modify Owner. This means that an object owner can still pass the ownership along, but doesn't have the ability to change permissions.
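For the curious, here's roughly what setting that looks like in PowerShell (the OU DN is made up, and this isn't necessarily how we apply it on our side):

Import-Module ActiveDirectory
$ou  = "AD:\OU=Example,DC=netid,DC=washington,DC=edu"
$sid = New-Object System.Security.Principal.SecurityIdentifier("S-1-3-4")   # well-known 'Owner Rights' SID
$rights = [System.DirectoryServices.ActiveDirectoryRights]::WriteOwner      # i.e. Modify Owner only
$allow  = [System.Security.AccessControl.AccessControlType]::Allow
$ace = New-Object System.DirectoryServices.ActiveDirectoryAccessRule($sid, $rights, $allow)
$acl = Get-Acl $ou
$acl.AddAccessRule($ace)
Set-Acl -Path $ou -AclObject $acl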

We didn't think this change would mean much to everyday use, but it turns out it did.

Our first problem was with an ACE that the Active Directory Users and Computers tool puts in place--trying to be helpful. For example, when you create an OU, you may see a checkbox labeled "Protect container from accidental deletion." When checked, that results in an inherited ACE applied to the OU. UWIT creates all the top-level delegated OUs, and on about 80% of those we checked that box. Prior to the above change, that didn't cause a problem when an OU admin went to create a child OU because the creator/owner could still set the inherited permission on the child OU. But after the above change, this ACE would prevent the child OU from being created. We've since removed the problematic checkbox on all top-level delegated OUs, so this isn't a problem moving forward.

The second problem was with delegating who could join a computer to a computer object an OU admin created in their delegated OU. Yep, as you can probably guess, with owner perms this was no problem prior to the change, and after the change it was a problem. In this case, the (computer) object was created, but the ACEs representing join perms were only set for the creator/owner, not what was specified. And no error message is raised about Active Directory Users and Computers' failure to set the desired join permissions. We've fixed this problem by granting OU admins the ability to modify permissions on computer objects, and you'll see that represented in the documented permissions too.

So in both of those cases, the behavior prior to 5/11/11 is in effect.

Finally, I should mention one other delegated OU permission change we made on 5/11/11. Because we grant pretty broad permissions, prior to 5/11 it was possible for OU admins to rename their delegated OU by themselves. But now they can't rename their top-level OU. We don't object to a name change of your top-level OU, but there are implications based on the many namespaces that are linked to that OU name. So we'd like to be involved in name changes of that top-level OU to ensure that everything dependent on that name is properly adjusted.

Engineering; UW Infrastructure | 6/20/2011 11:40 AM | Brian Arkills
  
About 2 years ago, I wrote about how the UWWI user displayName provisioning works. Last Friday, there was a slight change, and this post is to amend that prior post with the new behavior.
 
UWWI user displayNames are used most prominently by UW Exchange, but are also used by other applications that leverage UWWI, such as UW Sharepoint.

 

So the change itself is small, but it potentially affects quite a few UWWI users. In specific, any shared UW NetID can be affected by this change. Back in 2009, I wrote:
 
"Upon provisioning, our account creation agent, fuzzy_kiwi, does some complicated name parsing logic similar to what I'll describe below. For uw netids where it doesn't find a PDS entry, or where it isn't allowed to publish the name, it stamps the uwnetid value on the name attributes. And for some uw netids, that initial value is where things end."
 
Until last Friday, that was the end of the story for all shared UW NetIDs. The only way they got a displayName value was via our account provisioning/password agent, and the only displayName value they could have was the uwnetid value.
 
The account provisioning & password agent now uses different logic to determine the displayName it should populate for shared UW NetIDs. For shared UW NetIDs, it instead uses the PDS displayName value with some minor manipulation to improve the casing.
 
What is the PDS displayName attribute?
 
Well, here are some factoids about the PDS displayName attribute:
  • It generally isn't used very widely (because there are many other name attributes which generally have more interesting data)
  • All values in it are upper case
  • There isn't a self-service method to modify the PDS displayName attribute, but the administrators for a given shared UW NetID can call the UW-IT Help Desk to get it modified
  • The source for PDS displayName attribute value for shared UW NetIDs comes from the UW NetID system
  • The source for PDS displayName attribute value for other UW NetIDs varies (and is generally what is considered the "official" source of data)
Some of these factoids about the PDS displayName attribute may change (perhaps radically) in the future, as it may be the attribute used to deploy the UW NetID displayName solution I talked about 2 years ago.
 
So the UWWI user displayName for shared UW NetIDS will now be upper case?
 
No. We do a slight modification to the data to make it more readable/usable. We lowercase it all, and upper case only the first character in each "word" in the data. So generally, it looks pretty good, but there are some cases where the first character of a "word" is a special character, and then it doesn't look quite right.
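The transformation is roughly equivalent to this (not the agent's actual code):

$pds = "ARTS SCIENCES DEANS OFFICE"                                # what PDS hands us
$displayName = (Get-Culture).TextInfo.ToTitleCase($pds.ToLower())  # "Arts Sciences Deans Office"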
 
Example UWWI user displayNames that might come out of this are:
  • "Brian D. Arkills"
  • "Arts Sciences Deans Office"
  • "Temporary Patron"
  • "Simpson, Bart"
  • "Brian Arkills 'admin'"
Note that this logic doesn't attempt to re-order the data for consistency, so "lastname, firstname" isn't re-ordered to "firstname lastname". But keep in mind that generally Shared UW NetIDs don't have a person's name on them, but instead have a department's name or something similar.
 
So when would a UWWI user that is a shared UW NetID benefit from this new displayName provisioning functionality?
 
Here are the ways this new logic can be triggered:
  • UW NetID password set (usually happens after UW NetID creation)
  • UW NetID password change
  • UW NetID rename (a rename isn't recommended unless you can't avoid it)
And that's it. This logic change is only implemented in the account/password provisioning agent, not in ILM.
 
Earlier I mentioned the UW NetID displayName solution that I revealed 2 years ago. That solution isn't dead, it just hasn't been prioritized. And we still see that solution as simplifying the complexity, providing self-service management for all UW NetIDs, and most importantly putting control into your hands.
UW Infrastructure | 6/8/2011 9:39 AM | Brian Arkills
  

Back in April, we received a request to open visibility on the userAccountControl attribute for all UWWI users. This was to enable UW-IT's Unix Engineering team to leverage UWWI for a Xen eScience project they were working on.

In specific, this was a requirement imposed by Likewise, a Unix-AD interoperability solution, that is nested within the XenServer virtualization product. See http://lists.likewiseopen.org/pipermail/likewise-open-discuss/2009-May/001179.html for more details on the technical requirements.

We didn't see any serious threat associated with this, and given the interest in Likewise captured by the Delegated OUs survey we had just completed, this seemed like it fit with what other clients would need. So that change was made.

On the related topic of Unix-AD interoperability, I know several departments are interested in this topic, and over the past several months there have been a couple UW-IT projects which touched on this, but aside from the XenServer project noted above, I'm not aware of anyone actively doing/working on this in UWWI yet. If you are looking at this, I'd love to hear from you--it'd be good to start capturing details on some of the solutions being used.

Engineering | 7/2/2010 11:19 AM | Brian Arkills
  

ADAM (Active Directory Application Mode), now called AD LDS (Lightweight Directory Services) is a standalone LDAP server from Microsoft. AD LDS has been around for awhile, but it's never gotten the notice that it deserves. Personally, I've always been intrigued by LDS, but I've never taken the time to give it a closer look. Over the last year, I've been hearing interesting tidbits from other universities about how they have used LDS to solve some hard problems, so I've become more intrigued about it.

Then a few months ago, I was fortunate enough to be able to attend a presentation given by Dmitry Gavrilov from Microsoft. Dmitry is the core developer of LDS, and has been on the core developer team since project inception in 2002. This was a great opportunity for me to fill in the blanks in my awareness of LDS functionality, and possibly to find out whether it might be useful for the UW.

LDS has a flexible, barebones schema. This is because the intention was to provide an LDAP directory product that didn't come with AD's network OS overhead. LDS doesn't require AD, but when used with AD, it can provide some very interesting and useful functionality that AD by itself can't provide.

LDS does not have a user objectclass (i.e. no objects with objectclass=user). Instead, there are two kinds of directory objects that can bind:

  • objects with objectclass=msDS-BindableObject, for an ADAM-based "user" principal
  • "User Proxy" objects, i.e. any object with objectclass=msDS-BindProxy

"User Proxy" objects are *very* interesting, and are the source of functionality that AD itself can't provide. Let's look closer at User Proxy objects.

At creation time, User Proxy objects are associated with an existing Windows user account, either an account local to the LDS server or a Windows domain account trusted by the LDS server, with the SID of that Windows user account stamped on the User Proxy object. User Proxy objects have *no* password set on them. To login with a User Proxy object, you do a simple LDAP bind, sending the LDS server the password associated with the Windows user account represented by the SID stamped on the User Proxy object. LDS takes the simple LDAP bind request, does a LsaLookupSids() call to find the Windows authority for the associated SID on the User Proxy object, and then finally LDS proxies an authentication attempt to that other Windows authority by performing Windows impersonation via a LogonUser() call with the password value provided in the simple LDAP bind. Assuming successful authentication, the user then has a logon token issued by LDS.
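To make that concrete, here's a rough ADSI sketch of creating a User Proxy object from PowerShell. It assumes the optional MS-UserProxy.LDF schema extension has been imported into the LDS instance, and the server, port, DNs, and account are all made up:

# Grab the SID of the existing AD account the proxy will represent
$adUser = [ADSI]"LDAP://CN=Jane Doe,OU=People,DC=example,DC=edu"
$sid = $adUser.psbase.Properties["objectSid"].Value

# Create the proxy in LDS; objectSid can only be stamped at creation time
$container = [ADSI]"LDAP://ldshost:50000/OU=Proxies,O=AppDirectory"
$proxy = $container.psbase.Children.Add("CN=jdoe", "userProxy")
$proxy.psbase.Properties["objectSid"].Value = $sid
$proxy.psbase.CommitChanges()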

*None* of the user attributes associated with the Windows user account are proxied into this logon token. Instead, only attributes associated with the User Proxy object are associated. This is an important detail, because it means you don't get any of the underlying Windows user account's info, but you also gain control at the LDS server level of what's in the logon token.

Any given LDS server can only have a single User Proxy object associated with the same SID, however, you can have as many independent LDS servers as you'd like all with User Proxy objects which point at the same SIDs.

For some scenarios, this could be one solution to the problem most universities face of needing to delegate some AD user attribute administration across many departmental IT groups, but not having the ability to delegate to all those departmental IT groups. You no longer need to collect information to determine which user goes with which group, or to solve the multiple departmental affiliation problems. You simply have departmental IT groups which need to control some user attributes for specific applications run their own LDS server, and use it to set the user attributes they need, while still leveraging the centrally provided authentication.

Note that for this to work, you have to be able to control which LDAP server the client/application is using. So this isn't a solution for tightly integrated Windows user attributes like home directory, roaming profile, etc.

However, it's my understanding that some universities use this functionality for non-Windows OS support like Mac OS or Unix clients. They point those clients at the LDS LDAP server, and get central authentication along with departmental control of the necessary user attributes expected by those platforms.

It's worth pointing out that the functionality provided by LDS's User Proxy feature is similar to the functionality provided by the Virtual Directories that I blogged about recently. You could gain some of the same control and integration by using a Virtual Directory product, depending on product specifics. The primary differences are that LDS is a Microsoft product and it's free. Of course, there are more detailed differences under the hood.

Two other tools included in LDS are worth noting:

  • ADSchemaAnalyzer is very useful, allowing you to compare the schema of two LDAP directories.
  • ADAMSync provides a simple out-of-the-box DirSync solution to copy AD data into LDS. It doesn't handle complex data transformations like ILM/FIM do, nor does it sync schema.

All in all, LDS is a worthwhile tool to have in your toolbox, and I'll now be on the lookout for scenarios where it'd be a good fit.

Engineering | 6/28/2010 9:46 AM | Brian Arkills
  
A couple months ago at the Windows HiEd conference 2010, I had to miss a really great talk by a good friend at Stanford, Sean Riordan. Sean was talking about their project to roll out a service catalog with automated order processing and provisioning for the central IT department at Stanford.
 
The idea was to deliver:
  • A menu of services
  • Order processing, with workflow that streamlines authorization, required info, billing, and ticketing. And which knows about your existing services and the details associated.
  • Order provisioning, using automation where possible.
In other words, the kind of experience you get at Amazon or some other online service.
 
Very fancy stuff and quite enterprising. And not something I hear about at many universities. I've heard about some automated ordering/provisioning for single services (like virtual servers) at universities before, but never for an entire catalog of services.
 
I know a few other folks here at the UW managed to catch this presentation, and I just watched a recording of it myself. I thought the material was interesting, and worth wider sharing.
 
To view the presentation recording go to:
 
I'd be happy to send folks along to the presenter, so if you have questions send them to me, and I'll pass you along.
6/17/2010 11:30 AM | Brian Arkills
  

While putting together the Windows HiEd 2010 conference topics several months ago with Microsoft and other conference organizers, I suggested we have someone from Microsoft talk about "NextGen AD" since there had recently been some hubbub about this at PDC. What we got was a session on System.Identity. System.Identity shouldn't be confused with ADFSv2 or the Windows Identity Foundation, both of which have shipped in the last 6 months.

You can see the PDC session that covers System.Identity which started the hubbub. And you can read a very interesting write-up about System.Identity here.

Microsoft observed that existing directories have design constraints that make them difficult to use for all of an application's identity needs, so many applications end up developing extensions to meet this unmet need. But each of those applications is re-inventing the wheel, and these custom extensions are often a stumbling block to getting them to interoperate with each other. And all these applications basically result in AD data getting copied into the application's SQL database. Microsoft also observed that a key failing was not allowing enough flexibility in defining relationships for any given identity. For example, in Active Directory, you really only have security groups to define relationships with. But groups only represent "member" relationships, have privacy limitations, and require that the "member" be in the local AD.

System.Identity is designed to help eliminate the wasted development costs put into each application's identity needs. It does this by providing an identity model which is flexible and is not wedded to any particular authentication or authorization system/protocol. In other words, it provides an identity abstraction that an application can leverage, so that the application doesn't have to worry about implementing support for authentication protocols, user settings sometimes called the "profile" of the user, and relationships that typically are used for authorization decisions. In the System.Identity model, relationships are "first class" entities, as opposed to a second class after-thought.

System.Identity is currently a community technology preview (CTP) experiment. Microsoft is hoping that others see the value of a common application identity framework which can leverage existing directory and identity technologies, but allow the greater flexibility needed by each application.

You can find out more at http://connect.microsoft.com/SystemIdentity.

It remains to be seen whether this is part of a larger NextGen Active Directory strategy or just another incremental step forward.

6/17/2010 11:04 AM | Brian Arkills
  

Over the past several months I've seen 4 or 5 presentations on ADFS2. And of course, ADFS2 was released to the web on 5/5/2010. And as announced previously on this blog, we're actively in a partnership with Microsoft around Shib interop with ADFS2.

So it's time for all this material to get into a blog post. :)

First, see http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=118c3588-9070-426a-b655-6cec0a92c10b&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:%20MicrosoftDownloadCenter%20(Microsoft%20Download%20Center) for the ADFS2 download.

Second, the terminology is important to understand, so here's a decoder ring. I've put in bold the terms I think make the most sense to use.

Security Token Service (STS) = Identity Provider (IdP) = Issuer = the server that verifies identity, adds claims, and issues a SAML token.

Claims = Attributes = Assertions = information about the identity associated with the login token which the token issuer claims/asserts are true.

Relying Party (RP) = Service Provider (SP) = Consumer = a server which wants to consume a federation token, possibly using claims in that token for authorization decisions.

One really important concept to understand about the federation IdP/STS role server is that it is *all* about issuing a token in a standardized form, which in both the Shibboleth and ADFS (either version) cases is a SAML 1.1 token. It takes some form of authentication credentials and "transforms" those provided credentials into a SAML token which you can use in a standardized way. So if you are a Math or Coding geek (as I am) you might think of the federation IdP/STS server as a function that transforms non-standard input into a standardized output. Or if you've travelled at all, you might think of the IdP/STS as the money exchange you have to visit each time you go from one country to another. You have currency--it's just that no one accepts the currency you have, and so you need to get the currency that is accepted locally.

Another really important concept to understand is that the Consumer only knows about a single Issuer, or put another way: the Consumer only trusts a single Issuer. I talk about the federated access dance down below.

Third, you might be wondering a lot of various questions.

Why federate? Why not use Windows Integrated Authentication?

Common answers are:

  • you've got stuff to protect that isn't Windows based
  • you need to give folks outside the UW access
  • you are porting your app to the cloud

Why use ADFS when we've already got Shibboleth?

The answer there is much more complex. As I've already said previously, ADFS2 has interop features with Shib, so stuff that uses an ADFS2 Security Token Service (STS)--which means the same thing as an IdP in Shibboleth-speak--can accept SAML 1.1 login tokens from a Shib IdP.

But there is more to federation than just issuing login tokens. You also need to accept federated login tokens, i.e. the role that Shib calls a Service Provider (SP) and ADFS calls a Relying Party (RP). And assuming you are accepting login tokens from elsewhere, you also need to perform token transformation or claims transformation. Claims transformation is the process of taking claims issued by another authority and transforming them to mean something within your environment, typically for use in authorization or to populate a user's details in an application.
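To give a flavor of what a claims transformation looks like, here's a single rule in the ADFS2 claim rule language, wrapped in the PowerShell you'd use to apply it. The relying party name and the incoming Shibboleth ePPN claim type are my own assumptions, not anything we've deployed:

Add-PSSnapin Microsoft.Adfs.PowerShell
$rule = @'
c:[Type == "urn:oid:1.3.6.1.4.1.5923.1.1.1.6"]
 => issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn", Value = c.Value);
'@
Set-ADFSRelyingPartyTrust -TargetName "Example RP" -IssuanceTransformRules $rule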

Something like Sharepoint 2010 has built-in support for ADFS2, meaning that as a RP, it understands ADFS2 issued tokens and can leverage ADFS2 claims transformations. As I understand it, *only* using Shibboleth to federate Sharepoint 2010 means you'd need to write code for some custom claims transformations.

So the more appropriate question is can we leverage Shibboleth as the point of token issuance along with ADFS2 for the claims transformation? And the answer to that is yes. I've seen several demos of this working.

The trick then becomes figuring out whether you also want to be able to use ADFS2 for token issuance.

Why might you want to use ADFS2 for token issuance on top of Shibboleth?

Well, your Windows login token, i.e. the one you get when you interactively login to your computer via a Windows domain account, can be used to automatically get an ADFS2 login token without throwing yet another authentication prompt in the user's face. With the advent of UWWI delegated OUs, you might believe that reducing the number of logins is desirable and attainable for some population. Whether there is enough demand there isn't clear to me yet.

However, if it was something we wanted, then we'd next need to deploy a custom WAYF on the ADFS2 server.

What's WAYF?

WAYF stands for Where Are You From? WAYF is the mechanism where the client chooses what Issuer they'd like to authenticate with. You've probably used the WAYF associated with InCommon to choose that you are from the UW and want to login against the UW Shibboleth IdP. In this case, we'd need to have a WAYF that gave 2 options:

  • UW Shibboleth (i.e. Weblogin)
  • UW ADFS (i.e. UWWI).

Moving on from high-level stuff, let's talk more about details.

The federated access dance is something like this:

a) Client goes to Consumer (a website)
b) Consumer asks client for a valid token
c) Client tells Consumer it doesn't have a token
d) Consumer sends a 302 redirect to client to talk to its Issuer (Issuer A)
e) Client hits the WAYF at Issuer A
f) Client chooses something from WAYF and gets sent a 302 redirect to their Issuer (Issuer B)
g) Client hits Issuer B, provides credentials
h) Client is issued a SAML token by their Issuer (Issuer B), gets sent a post redirect to the original Issuer (Issuer A)
i) Client hits the original Issuer (Issuer A), which takes the SAML token from Issuer B and transforms it, issuing a new SAML token and sending a post redirect to the Consumer. The transform may drop some claims and add some new claims, all based on business rules specific to the trust between Issuer A and Issuer B.
j) Client hits the Consumer and presents a valid token
k) Consumer evaluates claims in the token, and decides what resources the client has access to. In some cases (e.g. Sharepoint 2010), it might have its own "internal" STS that is used across a farm of resources, and issue yet another token to the client (sending us back to step i).

ADFSv2 supports the following possible authentication sources:

  • Active Directory
  • a "chained" SAML 1.1 token, i.e. a token issued by another federation server

If you have a different authN source (say an MIT KDC) or need to accept some other kind of token, you can no longer use the "out of the box" ADFS product, but need to build your own STS using the Windows Identity Foundation (WIF) SDK. Many online examples show you how to do this in a step-by-step approach, but this is not for the faint of heart. This point is important to keep in mind, because sometimes when people say they are using ADFS for some interop scenario, they aren't--they are using a custom STS that is really just a distant cousin of ADFS.

ADFSv2 supports the following possible directory sources for claims:

  • Active Directory
  • MS Active Directory Lightweight Directory Services (LDS)
  • SQL
  • LDAP directory which uses Windows integrated authentication
UW Infrastructure | 6/17/2010 9:39 AM | Brian Arkills
  

A couple months ago, I was fortunate to have been able to see a presentation from the University of Florida on their OCS deployment this past year.

One of the more interesting things about their presentation was that they funded their initial deployment, including the support staff, by recovering costs via power savings and securing one of the federal Green grants.

The U of F happens to have two campuses, and they calculated that meetings which involved staff from both campuses had a significant overhead in travel expenses--gas (and thus a carbon footprint), staff time to travel, parking snafus and the like.

By deploying OCS and round-table video cameras in a couple key meeting locations, they were able to significantly reduce the need for that travel, while also providing the added value of the other functionality that comes with OCS. I found this pretty interesting, especially since the UW is actively looking for ways to be more Green.

If you are interested, you can see a recording of the presentation here: http://mediasite.online.ncsu.edu/online/Viewer/?peid=f06b431c1a894a6d95f2a8d0e5d31cc9

Exchange | 6/15/2010 2:32 PM | Brian Arkills
  
About a month ago, I had the opportunity to speak with quite a few virtual directory vendors. I've always been a little curious as to why you'd want to implement a virtual directory solution, so initially I was mostly satisfying my curiosity, but as I looked closer, I began to be very impressed and quite interested.
 
For a high-level intro, these virtual directory things are basically an abstraction layer which proxy requests to your existing directories and databases (and possibly other things). I once had a CS professor who claimed that all CS problems were solved by 1 of 3 solutions, one of which was abstraction, so you might imagine there are quite a few uses here. Keep in mind that a virtual directory is not a metadirectory or registry; a metadirectory has synchronization and associated latency whereas a virtual directory is working against live data in multiple sources.
 
The high level business use cases I heard (and saw) examples of are:
  • enables flexibility to make operational changes. For example, think of how Windows DFS allows you to swap out the physical server or even the platform/vendor running on that server. Same thing here. Clients are pointing at the VirDir, and you now have control where their requests end up.
  • enables performance improvements. You can do pre-processing, rewriting greedy/wasteful requests to be more efficient. Some VirDirs also do query caching, allowing you to avoid any hit on your data sources.
  • enables multiple views of data, i.e. you can virtually re-organize your DIT for an app which requires that the data be organized differently than you've deployed it
  • enables directory consolidation, e.g. all those apps that assume they can get person and group data from a single directory and end up using UWWI might instead go to a VirDir which queries both PDS and GDS instead. Also note that this directory consolidation might be app-specific, or it might be a first step toward permanently consolidating many directories.
  • allows extension of your directories using non-LDAP data sources. This benefit is something that Microsoft is looking at seriously as a strategic investment, because data stored in a LDAP directory has many limitations, and I suspect we'd also have many benefits here if we thought a bit about it.
  • chained authN, i.e. first look here, then look there
One demo I saw was a single Sharepoint 2007 farm shared by many Windows forests, because a VirDir was in the middle and used as the authentication/authorization provider which in turn chained those authentication events along to each of the forests.
 
Some of these VirDir products would allow us to implement business rules which filter out which UWWI memberOf values are returned to those clients which need to integrate to that data. In other words, this is a way to add additional filtering, authZ, etc, which isn't necessarily supported by the underlying vendor product.
 
Vendors I spoke with included: Optimal IdM, Radiant Logic, and SymLabs
 
All in all, it's something I plan to think more about, and figured folks might like to hear about.
Engineering | 6/15/2010 2:12 PM | Brian Arkills
  
A week ago, on June 8th 2010, UW-IT announced that the long awaited Delegated OUs service was open for business.
 
A week later, we have 4 new delegated OUs.
 
For more information, you can visit the Getting Started info on this page: http://www.netid.washington.edu/documentation/gettingStartedWithOus.aspx.
6/15/2010 2:08 PM | Brian Arkills
  
Folks might recall my post quite some time ago about adding UW Holidays to UW Exchange. See https://sharepoint.washington.edu/windows/Lists/Posts/Post.aspx?ID=98 for the details.
 
I've updated the Outlook.hol file noted in that post to include the 2011 UW Holidays. I've also fixed a mistake I made that left the last two holidays of 2010 off.
 
Enjoy!
Exchange | 5/21/2010 3:29 PM | Brian Arkills
  
Over the past month or two, there have been a couple of minor documentation updates.
 
Most notably, we've taken over ownership of the "Activating Windows on Campus" documentation and brought it up to date. Back in the Vista release timeframe we wrote that document, and UWare chose to host it within their web directory. The UWWI service line includes the campus KMS service, so it was a natural progression to move this document somewhere it could be maintained over time. You can find that document via a link on the UWare site, linked from the UWWI document index http://www.netid.washington.edu/documentation/default.aspx, or directly at http://www.netid.washington.edu/documentation/activatingWindows.aspx.
 
The UWWI Architecture Guide has received a few minor updates mostly around documenting permissions.
 
http://www.netid.washington.edu/documentation/archGuide.aspx#visibility now lists which managed UWWI user attributes are visible by anyone with a UW NetID and http://www.netid.washington.edu/documentation/managedUserAttributes.aspx now includes that information as well.
 
http://www.netid.washington.edu/documentation/archGuide.aspx#selfWrite now talks a little about how some UWWI user attributes (managed or unmanaged) have permissions which allow the users themselves to update them. A new document, http://www.netid.washington.edu/documentation/selfWriteUserAttributes.aspx, lists those attributes.
 
Looking farther back in time, last year we received quite a few requests to integrate applications with UWWI for authentication and authorization. Out of that flurry of requests, we wrote 'How to use UWWI for LDAP Authentication' at http://www.netid.washington.edu/documentation/ldapConfig.aspx.
 
I expect that we'll be adding quite a few new documents as we extend UWWI to support the features from the Delegated OUs project, so stay tuned for more updates.
UW Infrastructure | 4/1/2010 2:45 PM | Brian Arkills
  
This morning we had a project kick off. Hooray!
 
One outcome of this morning's meeting was that a new mailing list called uwwi-discuss will be coming soon. Its purpose will be to facilitate discussion about the UWWI service in a multi-way interaction. We'll share project news with the community there, in addition to blog posts here, and hopefully establish a feedback loop that improves the quality of what the project delivers.
 
The uwwi-announce mailing list will continue to be for service announcements.
 
If you'd like to get on the uwwi-discuss mailing list, let me know.
UW Infrastructure | 4/1/2010 2:32 PM | Brian Arkills
  

For a press release, see http://www.microsoft.com/Presspass/press/2010/feb10/02-24CIOSummitPR.mspx, and for a Microsoft blog about it, see http://liveatedu.spaces.live.com/blog/cns!C76EAE4D4A509FBD!2155.entry.

 

The Microsoft U.S. Public Sector CIO Summit is happening today, and there is a presentation and demo happening there with our very own Terry Gray presenting.
 
I'd say more, but this is a public blog, and we're under NDA. :)
UW Infrastructure | 2/25/2010 1:24 PM | Brian Arkills
  
For awhile now, we've been hoping to continue the work we envisioned back in 2006 (we called it "WinAuth phase 2" back then). And I believe we are on the cusp of embarking on that work. Hooray!
 
I'd like to thank everyone who filled out the survey. Your input is invaluable, and we'll be following up on it. And if you haven't yet filled it out, there is still time until March 3.
 
I did want to take a little bit of time and point out some of the high-level takeaways we're seeing from survey responses.
 
First, everyone who took the survey wanted to leverage UWWI user accounts. That's especially significant when you look at who took it and the broad spectrum of uses folks are interested in.
 
Second, there is a high degree of interest in including user attribute management in the project scope.
 
Third, almost everyone would move some computers in by the end of summer if they could. And about half of respondents have non-Windows computers they'd like to move in.
 
Fourth, in sharp contrast with what we heard a year ago, only 8% of respondents would *not* move computers in if we offered no migration assistance. 1/3 of respondents would like some migration assistance, but the majority of folks are more interested in a self-migration option.
 
Finally, the concerns and open feedback questions made it clear that there is quite a bit of education we need to do about what functionality is already in UWWI, what existing departmental processes might need to change if you adopted a Delegated OU, and the need to continue a dialogue about what you'd like UWWI to be.
 
These high-level takeaways are very likely to shape the actual project work.
 
For example, it seems clear to me that opening the doors is the highest priority. So hammering out use policies, defining support processes, and documenting all the details that y'all will need are the important deliverables there.
 
In contrast, the migration tools and process we thought were the most critical piece of the project, now seem to be a much lower priority. Among that work, I'd still prioritize a bulk group import tool, because feedback from the ISchool pilot made it clear that this was the most painful part of their self-migration process.
 
Then in the middle of the priorities, we need to re-engage on user attribute management. We'll need to discuss the user attributes y'all highlighted, see if workarounds reduce the perceived need or not, and then prioritize those most widely needed.
 
We've been around the block with some of you a few times on envisioning a solution to the multiple affiliation (or unclear affiliation) issue for user attribute management. In other words, how do we figure out who should be authorized to assert an attribute value for any given user? Our last idea for solving this conundrum seemed to have pretty good support, but I'm not sure how widely the idea has been shared. Basically, the idea is to initially put the management of these attributes into the hands of the users by leveraging the UW NetID manage page. But instead of forcing users to know an obscure attribute value, such as a UNC path for their home directory, the user is presented with a friendly choice of various departments. For example, in the UI they might pick the "Ischool Home Directory", and the UI will map that choice to the obscure formula that the ISchool uses for its home directory paths. We might further simplify things by allowing the user to choose "all ISchool UWWI values" in that UI--or to assert that the ISchool is their local IT support org. And this last possibility conceivably could open the door for us to allow departmental IT folks to assert values without the direct involvement of the end user.
 
Now, taking a step back, let's touch on the concerns raised in the survey responses. Not everything is roses. :)
 
There are various 3rd party application integration challenges that we'll need to look more closely at. Weaker password policies in the UW NetID system than some of you require are clearly an issue which will need more attention, and we'll work to support changes there.
 
Concerns about losing control over access control to your computing resources are clearly something we'll need to address. That one is a bit tough, because while it has some educational components to it, and there are definitely steps you can take to reduce your risk, at the end of the day this issue is really about trust. Are UW Tech domain admins trustworthy? Will you be compromised by some other department in UWWI? We'll do everything we can to earn your trust, and if you have specific ideas about things we can do here, please send them.
 
Another concern that I've heard a couple times over the last few weeks both in and out of the survey responses is a concern that UWWI Delegated OUs might be "like the UW forest". There are a couple concerns wrapped up in that, which include:
  • a concern about the high number of domain compromises which over time have happened in the UW forest and that similar things might occur,
  • a concern that UW Technology will be a "big brother" forcing things on folks that they don't want
  • a concern that at some later date UW Technology might pull support for this out from under folks
Obviously these concerns get at the shared nature of this undertaking. Addressing these kinds of concerns can only happen via developing use policies which seek the common good and by securing the commitments of those departments which adopt a Delegated OU. So I'd encourage folks with concerns like this (and the access control one above) to take an active part in helping draft and vet those use policies.
 
Well, this has turned into another of my long write-ups. But regardless, I do want to end by saying that I'm excited about what's ahead, and look forward to partnering with many of you!
UW Infrastructure2/24/2010 11:29 PMBrian Arkills
  
A couple days ago, Mark Russinovich blogged about the Machine SID Duplication Myth.
 
Initially, I had a lot of problems with his assertions until I realized that Mark wasn't claiming anything about all the other unique identifiers that can follow a computer around.
 
I know I'm not alone in wishing that Mark had been more clear in pointing out the fact that there are other unique IDs which sysprep is still very valuable for. Here's an example of another Microsoft employee who tried to fill the gap in Mark's blog post: https://blogs.msdn.com/aaron_margosis/archive/2009/11/05/machine-sids-and-domain-sids.aspx
 
Applications which are tied to unique identifiers that definitely benefit from sysprep include:
  • WSUS
  • SCCM
  • SCOM
  • Altiris
  • KMS
  • AD
  • ADAM
  • Sophos Anti-Virus (with thanks to Matt McAdams)

And I'm sure there are more. So don't stop using sysprep when you clone a computer.

Engineering11/10/2009 10:07 AMBrian Arkills
  
On Friday, the RC1 for "geneva" shipped. See http://tinyurl.com/yg26mw5 for bits.
 
There are a variety of articles, guides, and resources being authored. Here's a smattering of them:
 
MSDN Magazine, November issue, Using Active Directory Federation Services 2.0 in Identity Solutions
 
Microsoft Patterns and Practices: Claims Based Identity & Access Control Guide.
 
A variety of WIF developer tools are available at http://blogs.msdn.com/vbertocci/, including:
  • The Identity Developer Training Kit
  • Claims-Driven Modifier Control
4 months ago, there was the Geneva/MOSS step-by-step guide, which showed how to enable MOSS 2007 (not 2010), leveraging Office 2007 SP2 client apps, to use ADFS2 for authN/authZ via claims:
http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=57602615-e1ee-4775-8b79-367b7007e178. And there was a ADFS team blog post on this too: http://blogs.technet.com/adfs/archive/2009/06/16/office-integration-with-moss-and-adfs.aspx. I'd expect a version of this focused on MOSS 2010 sometime in the coming months.
 
And two months back, there was the successful Liberty Alliance testing.
 
Engineering11/10/2009 10:00 AMBrian Arkills
  
So some astute reader asked what I was doing with ADMT, wondering whether there were some domain migrations happening.
 
No, no domain migrations happening just yet, but there's a lot of background material here which is likely useful to hear about.
 
So each time an existing campus Exchange deployment has chosen to merge with the UW Exchange service, we've helped migrate those existing Exchange mailboxes into the UW Exchange service. The general process required to migrate involves having a sidHistory stamped on the destination user. So you might notice that some UWWI user accounts have sidHistory on them. And that's because we've used ADMT as part of these Exchange migrations to allow the mailbox migration to happen.
 
However, these one-time migrations have not been smooth, nor have they been especially well designed. In one case, a department asserted that the wrong source user be mapped to a UWWI user, and we had to subsequently remove the sidHistory. Mapping the source users to the destination users has been a very time-consuming process.
 
And the ADMT implementation was on a box that has many other functions, and used a local sql express instance. When you use a local sql express instance with ADMT, it means that you can have only a single ADMT console doing migrations of any type.
 
So the work behind the prior blog post was to clean up this state of affairs while retaining the migration information associated with the migrations we've done in association with Exchange.
 
That work also serves another purpose, which is to get us a bit closer to being ready to allow for migrations into UWWI. We have some vision in terms of how migrations might work, and this is probably as good a time as any to share that vision. :)
 
So we envision this sort of workflow:
  1. You go to a webpage which explains the process, and which tells you what information you need to gather to start.

    This will involve a tool or directions on how to use ldifde to generate a list of your users and separately your groups (there's a rough ldifde sketch just after this list).

    You'd then need to manually remove any users/groups you don't want to migrate.

    And you'd want to pre-screen this list to make sure that the usernames are valid uw netids. And if not, you'd drop or rename those users.

    This webpage would also tell you that you need a trust to UWWI, to set a couple configuration settings on your domain/domain controllers to allow sidHistory migration. And there'd be some other details noted that would need to be ironed out, things like making sure specific users are authorized to go to the webapp and to the ADMT database.
  2. You'd then go to a web application/service of some sort where you'd submit the edited output of the tool/ldifde.
  3. The web app/service would check your submitted list for problems, like no uw netid, and give you back a webpage where you'd see details about the source/destination accounts so you could vet the user migration action. This would allow you to say, "No, cop\barkills (displayName=Bart Kills) is not the same person as netid\barkills (displayName=Brian Arkills), so drop cop\barkills until we can figure out what's going on with that user."
  4. The web app/service would then carry out the migrations. Behind the scenes ClonePrincipal would be used to add the sidHistory, and we'd insert the right stuff into the ADMT database to make it look like ADMT had done the migration. We wouldn't use ADMT itself because ... well ... ADMT has this bug. If a given destination user already has a sidHistory, then ADMT overwrites it. And that's not good.
  5. Now, this is the step where things are really cool. You'd then download a copy of ADMT 3.1 and install it wherever you like. During installation, you'd point it at our ADMT database. Then you'd use the various ADMT wizards to migrate your own resources, in whatever timeframe meets your needs. No need to coordinate with UW Technology (and the best thing of all from my perspective is that I don't have to be involved).

    You'd use the computer migration wizard to send out agents to your computers (on your schedule) that would automatically reACL them using all the migrated users/groups, would join them to the NETID domain, and reboot them.
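As a rough sketch of the export in step 1, something like the following ldifde commands would pull your users and groups into LDIF files you could then prune by hand. The OU paths and attribute list here are made-up examples, not a prescribed format; the real directions would spell out exactly what the web app/service expects.

ldifde -f users.ldf -d "OU=People,DC=dept,DC=example,DC=edu" -r "(&(objectCategory=person)(objectClass=user))" -l "sAMAccountName,displayName"
ldifde -f groups.ldf -d "OU=Groups,DC=dept,DC=example,DC=edu" -r "(objectClass=group)" -l "sAMAccountName,member"

The -d switch sets the search base, -r is the LDAP filter, and -l limits the attributes exported, which keeps the files small enough to review by hand.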

There are a ton of details not fleshed out here, but as I said this is a vision. And the prior blog post was about getting things moving toward this vision.

UW Infrastructure6/18/2009 8:45 AMBrian Arkills
  

This blog post focuses on an ADMT technical gotcha I ran into, where the documentation wasn't really very helpful. This may not be very widely useful, but someone is definitely gonna run into one or more of the problems I did, so I figure it's best to get this out there.

Specifically, I had a working ADMT 3.0 instance that used a local sql express database, and I wanted to move to a full sql database that was remote to the ADMT console, and also upgrade from ADMT 3.0 to 3.1.

The documentation around ADMT talks a lot about an MS_ADMT instance of the database. But this is only the case when you choose to install the database locally. It turns out that if you choose to use a remote sql database, you don't need to use MS_ADMT. This tripped me up.

Also there is an order-of-operations issue, a la the chicken and the egg, when you want to use a remote SQL database for ADMT. You need to precreate the ADMT database before installing ADMT, but you must install ADMT to get the special tool that allows you to create the ADMT database in the format that ADMT will recognize. This also tripped me up.

Migrating from ADMT 3.0 to 3.1 is also a bit tricky. The two versions apparently have a different DB format. The documentation alludes to being able to use the ADMT database creation tool to use a 3.0 database as an import source when creating a database, but in my experience it didn't actually work. But the documentation is hazy on this point, so maybe my reading of it isn't quite what was intended.

Finally, there's the fact that ADMT 3.1 only works on WS2008 64bit. Which can be a problem if you don't have remote access to your SQL server and it is running 32 bit WS2003.

Here's my step-by-step recipe to success, for those who want to do the same thing as me.

a) copy your local ADMT db, usually located at C:\WINDOWS\ADMT\MSSQL$MS_ADMT\Data
b) use the admtdb utility from your ADMT 3.0 installation to create a db on your remote sql server. You do *not* need a separate sql instance from the default sql instance. Just specify the sql server name. The account running admtdb must have the ability to create a database on that sql server. So for example, you'd say:
"admtdb create /s:mysqlserver.dns.whatever"
c) ask your DBAs to import the local ADMT db file to the ADMT database that was just created
d) install admt 3.1 on ws2008, say you have a remote sql server, give it a bogus db location, and choose finish (can't cancel)
e) use the ADMT 3.1 admtdb to upgrade the database to be compatible with admt 3.1. So for example, you'd say:
"admtdb upgrade /s:mysqlserver.dns.whatever"
f) uninstall admt 3.1
g) install admt 3.1 giving it the remote sql server above
h) enjoy!

6/17/2009 1:39 PMBrian Arkills
  
Over the past 4 months, I've been working on improving the UWWI directory synchronization story. A little while ago, I posted a technical introduction to ILM, here:
 
https://sharepoint.washington.edu/windows/Lists/Posts/Post.aspx?ID=103
 
In that post, I talked about some of the "gotchas" in our existing implementation.
 
I'm now back to report that most of those have been fixed. :)
 
I spoke of the high number of disconnectors, which resulted in a 3 hour cycle in our environment. That problem has been solved. The solution was to switch which management agent (MA) projects objects to the metaverse (MV). Instead of having the NETID Active Directory MA project, now the PDS custom MA projects to the MV. This means that instead of re-evaluating about half a million disconnectors (from the PDS MA) every time it runs, it only re-evaluates a small handful (from the NETID MA). The ride getting to this fixed state was a bit bumpy on the back-end, and I learned quite a bit along the way.
 
For example, once a given MV object has been projected, removing the projection rule which resulted in its projection does not remove the relationship with the original MA object. This can result in lots of awful behavior, especially when it happens in large numbers. One fix is to delete the MA space, reimport everything for that MA, and re-run a sync. Deleting the underlying MA object removes the relationship. But it also incurs an unexpected penalty as the MV space needs to re-evaluate everything. And that particular MV object may get deleted as a result, then get re-projected by the other MA.  
 
But the results are rather dramatic: our entire sync cycle now takes about 10 seconds to run. I've said elsewhere that it's about a 12-fold improvement, because we went from a 3 hour scheduled cycle to a 15 minute scheduled cycle. But in reality it's more like a 600-fold improvement, because the actual run time went from about 100 minutes to 10 seconds. Regardless, it's very good.
 
Another gotcha I've fixed is including more than just the uwPerson objectclass from PDS. There was really no reason to limit what classes from PDS contributed info, and so we did away with that limitation. Along the way, we also added uid synchronization to our ILM feed to UWWI. Combined with the non-uwPerson objectclass fix, this means that all accounts in UWWI which have been provisioned with a UW uid will have it present in UWWI on their uidNumber attribute.
 
However, I do need to report that one of the more visible gotchas is still outstanding. The name situation remains in the state I reported previously. There have been a couple new problems reported, and awareness of this issue seems to be spreading, but it hasn't reached enough critical mass yet to justify prioritization over existing projects. I'm hopeful that will happen within the next 6 months however.
 
My previous coverage of that problem was mostly just an overview, so it's probably worth taking the time now to cover the problem in greater detail.
 
Here's the skinny:
 
Upon provisioning, our account creation agent, fuzzy_kiwi, does some complicated name parsing logic similar to what I'll describe below. For uw netids where it doesn't find a PDS entry, or where it isn't allowed to publish the name, it stamps the uwnetid value on the name attributes. And for some uw netids, that initial value is where things end.
 
As you know, ILM connects NETID users with PDS objects, and keeps the name attributes in synchronization according to some complicated name parsing logic.
 
The recent fix to include non-uwPerson objects from PDS doesn't really change the name story at all, because the name attributes in PDS that ILM uses as seed information are not present on non-uwPerson objects.
 
This is an important point to understand. non-uwPerson objects don't have any name info that ILM wants, so ILM does nothing with them.
 
In contrast, uwPerson objects do have the naming attributes ILM cares about. However for those uwPerson objects, only two classes of users have any ability to modify those naming attributes. Only UW employees and students have any ability to change that name information. Employees can use ESS to modify that name info, and students must talk to the Registrar to modify it (yes, a phone call or an in-person visit, nothing online). To generalize the logic for these accounts to an understandable form, there are two important pieces of info. First, a flag which indicates whether your info is publishable. Second, the name you've given ESS or the registrar. If you've agreed to publish, then parsing happens, and your displayname comes in the form "Brian D. Arkills" or "Brian Arkills" depending on how many substrings there are in what you gave ESS/Registrar. If you didn't agree to publish, then parsing happens and your displayname comes out in a format like "B. Arkills". In some odd cases, your displayName can come out as just "Arkills" or "Brian". There isn't much flexibility here.
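To make that parsing logic a bit more concrete, here's a minimal PowerShell sketch of the formatting rules as I've described them. The function name and parameters are mine, not fuzzy_kiwi's, and the real logic handles more edge cases than this sketch does.

function Get-UwwiDisplayName {
    param(
        [string]$RegisteredName,   # the name given to ESS or the Registrar
        [bool]$Publishable         # the "OK to publish" flag
    )
    $parts = @($RegisteredName -split '\s+' | Where-Object { $_ })
    if (-not $Publishable) {
        # not publishable: first initial plus last name, e.g. "B. Arkills"
        if ($parts.Count -ge 2) { return ('{0}. {1}' -f $parts[0][0], $parts[-1]) }
        return $parts[0]           # odd case: only one substring to work with
    }
    switch ($parts.Count) {
        2 { return ('{0} {1}' -f $parts[0], $parts[1]) }                     # "Brian Arkills"
        3 { return ('{0} {1}. {2}' -f $parts[0], $parts[1][0], $parts[2]) }  # "Brian D. Arkills"
        default { return $RegisteredName }   # anything else passes through unparsed in this sketch
    }
}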
 
People who aren't UW employees or students have no ability to create the editable naming attributes ILM cares about. In other words, affiliates and sponsored UW NetIDs, shared NetIDs, temporary NetIDs, etc. all have no ability to change what their UWWI displayName is. So what displayName do they end up with? All of these accounts end up with a displayName in the "B. Arkills" format. The flexibility here is non-existent; there is no back-end solution here where someone can edit something to fix this for a given user. The entire system needs to be overhauled to fix this.
 
Yes, this is an awful state of affairs, and yes, I agree this should get fixed sooner than later. And if you agree, you should talk with your UW Exchange representative and ask them about raising the priority of this feature.
 
So, how or why did things get designed this way?
 
Well ... it turns out that back when we rolled out Exchange, the MSCA initiative was limited to *employees*. No students, no non-employees. And all the engineering inquiries into this design constraint came back that it was a firm limit. But then that limitation fell by the wayside rather quickly, and engineering wasn't given the time to revisit and refactor. So the combination of poor design constraints and then changing the scope after launch w/o revisiting the solution was a pretty big contributing factor.
 
Another factor was that our primary engineer was convinced that there was significant value to having consistent displayName formatting within the Exchange GAL. So he wanted "Brian D. Arkills" or "B. Arkills" only. This is why almost every displayName ends up in one of those two formats.
 
Another factor was that the name source information situation isn't pretty today. There's the official name info, which is case insensitive. Folks with mid-name capitalization lose out there, and it's rather hard to edit this piece of info. Then there's the editable name info coming from HEPPS (the HR source system) and SDB (the Registrar source system). But the info coming from those sources has no input validation, and is not guaranteed to follow any format. So while the user has lots of control, it's a nightmare to figure out whether a name will come in as "Brian Arkills", "Arkills, Brian", "Brian David Arkills", "Brian David Joe-Bob Arkills", "Mr. Brian Arkills", "Mr. Brian Arkills Sr.", etc. The number of permutations is endless, and it's impossible to predict what format the data will be in.
 
And that's it.
 
But in a very real sense, there is a bigger picture problem here: there are many source systems, and each of them does things differently. Getting all those source systems to implement naming information, input validation, publishing flags, etc. is an uphill battle. And when someone is in more than one of those source systems, then you have to choose which source wins.
 
The solution we imagine bringing to this situation is to implement a UW NetID level solution. From the UW NetID Manage page you assert your name. This eliminates the problems that come from multiple source systems, and the person vs. non-person issues. If you have a UW NetID, you'd be able to set the name. Period. It also allows us to implement input validation in a single place, and restrict the formatting to something reasonable and predictable. Obviously, UWWI would be one of the first to use such a mechanism, and hopefully other source systems will begin to see the value in having a single name across all systems and also leverage it instead. We imagine a state of affairs where people incrementally populate this new bit of name information, and UWWI continues to use the existing logic, unless this new info is present.
 
So in summary, things in the UWWI directory synchronization space are much better, but we've still got this name blot on our balance sheet. And hopefully we'll get fixing that prioritized soon.
UW Infrastructure5/12/2009 8:11 AMBrian Arkills
  

Beginning with Vista, Windows auditing moves from 9 "legacy" categories to 52 subcategories nested beneath those 9 legacy categories. And as if that wasn't enough change, beginning with Vista there is also a renumber of *every* security event ID, changing many of the events, combining quite a few, and enhancing the quality and quantity of information within many events. This combination of changes can leave a Windows administrator lost when looking at security logs.

So that we are all on the same page, let's double back on how things worked prior to vista.

Traditionally, via group policy (or local policy) you configure what categories (for either success or failure) can be audited, and then where applicable, you configure a SACL on the resources to generate that kind of audit in your security log. For some types of events, there is no SACL to configure; simply enabling the category results in security events being generated when the applicable action happens. Domain administrators traditionally configure categories in a sane manner for their entire domain, leaving out configurations which they deem to generate noise. And there was quite a bit of collaborative work around "good configurations" from various organizations focused on this legacy approach to auditing. But there is almost no work yet based on the new auditing paradigm.

By now, you are probably very curious to see a list of the new subcategories. And the nice thing is that you can ask your vista/ws2008 box what they are. Run "auditpol.exe /get /category:*" at a command prompt to see that list. And as a bonus, you'll also see what the active auditing policy is for that box.

So if you continue using your legacy auditing category settings with Vista and beyond, those legacy categories apply to all the subcategories nested beneath them, i.e. if I enable success audits for the logon/logoff category, then all the logon/logoff subcategories are enabled (and those subcategories are: Account Lockout, IPsec Extended Mode, IPsec Main Mode, IPsec Quick Mode, Logoff, Logon, Network Policy Server, Other Logon/Logoff Events, Special Logon). In other words, you'll see lots of noisy ipsec events, even if all you want is logon events. What is worse is that as of today, there is no direct ability via group policy to configure subcategories. And as I already pointed out, there are high volume subcategories which are mostly noise within categories where high interest events occur; so you want to be able to configure at the subcategory level.
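As a concrete example, here's roughly what per-subcategory configuration looks like with auditpol (this is also what the scripted methods mentioned below end up pushing out); the specific subcategories chosen here are just for illustration:

auditpol /set /subcategory:"Logon" /success:enable /failure:enable
auditpol /set /subcategory:"Logoff" /success:enable
auditpol /set /subcategory:"IPsec Main Mode" /success:disable /failure:disable
auditpol /get /category:"Logon/Logoff"

The last line reads the effective policy back so you can verify that only the subcategories you care about are enabled.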

There are indirect methods to use group policy to configure subcategories (using scripts and scheduled tasks), and there is a group policy setting which tells vista and beyond to ignore the legacy category settings they get from group policy and use whatever is set locally. Moving to a local setting model can be dangerous from a security perspective, and is very hard to manage at any scale. And the indirect group policy method is unreasonably complex. However, with ws2008r2, there is the ability to directly set subcategories via group policy. Existing webpages indicate this functionality will only work on ws2008r2 and Windows 7 clients, but I suspect that information is incorrect, and so I'm personally waiting a bit longer to see whether this new functionality improves the story before moving forward on the indirect group policy method.

Earlier I mentioned that there was next to no work based on the new auditing paradigm. There are a couple exceptions:

Randy Franklin Smith's Ultimate Windows Security website is an open collaboration around documenting Windows security events, settings, and auditing. Randy recommends a subcategory-focused auditing baseline on this website, http://www.ultimatewindowssecurity.com/wiki/RecommendedBaselineAuditPolicyforWindowsServer2008.ashx.

Eric Fitz at Microsoft is a master of auditing. See his 'Windows Security Logging and Other Esoterica' blog at http://blogs.msdn.com/ericfitz/default.aspx.

Ned Pyle on the MS ASKDS blog, writes about some cool auditing tricks in this post: http://blogs.technet.com/askds/archive/2007/11/16/cool-auditing-tricks-in-vista-and-2008.aspx. This post includes tricks for:

  • finding when users are elevating via UAC
  • find out who is making AD changes unrelated to Accounts aka who the heck has been messing with group policy?!
  • find out what is changing a registry value at random intervals

And if you'd rather not wait around to see if Microsoft fixes the subcategory group policy issue, you can implement an indirect, complex group policy based method by following the directions at http://support.microsoft.com/kb/921469.

Moving on from auditing policies to all the other changes ...

There are now some auditing events which can *NOT* be turned off. These are high security sorts of things like clearing the security log and service shutdown. Hooray!

With Vista and beyond all the eventids are new. So you will never see a 528 or a 540 or a 680 eventid on a vista box. Instead you'd see a 4624 event (yep, all three of those events are smashed into a 4624 now). In general, you can find the newer eventid from the old ones you might be familiar with by adding 4096 to the number, but this isn't true across the board. Check out Randy's website to see what has been verified, along with detailed explanations. You might also check out the Microsoft version of the new security events at http://www.microsoft.com/downloads/details.aspx?FamilyID=82e6d48f-e843-40ed-8b10-b3b716f6b51b&DisplayLang=en. But I wouldn't put much trust in the MS version; I've already seen several mistakes and many omissions in it. Randy's website is more accurate.

In general, the quality of the information in these new security events is also much, much better. For example, in the old events, you rarely got an IP address on the login events. In the new events, I haven't seen a case yet where the IP address is missing. The new events also seem to have better consistency.

One of the nicest enhancements is in the eventviewer GUI interface. If you've got a bunch of the same event filtered there and you want to look at the same part of the event message body, you can scroll to the relevant part of the first message and then browse through all the messages; the message body stays focused on that same area instead of jumping back to the beginning and forcing you to scroll back down for each event.

Another area which changed quite a bit is Directory Services auditing. In events generated out of this category, both the old and new values of whatever changed are logged. And what is logged makes sense:
  • for multi-value attributes, only what changes is logged
  • for new objects, only the initial values are logged
  • for moved objects, the paths are logged
  • for undeleted objects, the new path is logged

You can view more details about the DS auditing changes at http://technet2.microsoft.com/windowsserver2008/en/library/a9c25483-89e2-4202-881c-ea8e02b4b2a51033.mspx.

Of course, to see the DS audit events, you'll need to enable the right category (or subcategories), and also set a SACL on the directory object(s) you want to see those events for. A SACL is the part of the security descriptor which tells the host who to generate audit events for, with respect to that resource object. In general, you typically set a SACL to Everyone when you want auditing for a given object.

There is also a little-known Special Groups auditing feature that came with Vista. This feature allows you to specify a list of SIDs, and when a user logs in with a token that has one of those SIDs, then a special event is raised. You might use this feature to keep track of where sensitive accounts were being used, and to help ensure that they weren't used in the wrong place. See http://support.microsoft.com/default.aspx?scid=kb;EN-US;947223 for more on this.
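Based on my reading of that KB, turning this on amounts to populating a registry value with the SIDs you care about, along the lines of the sketch below; double-check the exact key and value name against the KB, and the SIDs here are placeholders:

New-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\Audit' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\Audit' -Name 'SpecialGroups' -PropertyType String -Value 'S-1-5-21-111-222-333-512;S-1-5-21-111-222-333-519' -Force

After that, a logon whose token contains one of the listed SIDs raises the special event described above.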

To see more details about all the eventlog related changes that came with Vista, see http://technet.microsoft.com/en-us/library/cc766042.aspx.

On the management side of things, Microsoft added the Audit Collection Services (ACS) product to the Systems Center Operations Manager (SCOM) product.

ACS allows you to collect security events centrally to a SQL database, run reports on those events, and alert on serious issues. ACS receives those events from the same layer as the eventlog (i.e. ACS doesn't get events from the eventlog alone, but instead from the same source that the eventlog gets them from). So if a hacker clears your eventlog to cover his tracks, ACS still gets a copy of the events. ACS also provides filtering capabilities, so if your audit policies are noisy, or you don't really care to collect/report on certain kinds of events, you can filter them from getting into ACS. In many ways, ACS is the Windows equivalent of the syslog daemon that we Windows admins have always envied our unix brethren for.

So, lots has changed in this space, and I'll likely have more posts about auditing in the future.

Engineering4/13/2009 10:28 AMBrian Arkills
  
This blog post focuses on the new OS platform and is based on information I've picked up from Microsoft via various sources. I suspect James or others might chime in on this topic to fill out details or fill in bits I don't cover here.
 
This post mentions the following features:
  • Managed service accounts
  • Applocker
  • BranchCache
  • AD recycle bin
  • Authentication mechanism assurance
  • SAML2 support via ADFS
  • AD web service
  • AD powershell module
  • Admin center
  • Offline domain join
  • Group policy enhancements
  • Bitlocker usb drive encryption
  • Remote desktop
  • DirectAccess
  • Multiple active firewalls
  • Support for DNSSEC
  • Certificate Services enhancements
Among these features, my favorites are Managed Service Accounts and Applocker, so make sure you read about those.
 
Managed service accounts
So you setup a Windows service. And it needs its own user account. And so then you share that password with ... uh ... all the people who might possibly need it to support that Windows service. And then at some point, you change the password. And then you ... uh ... try to figure out all the people who might possibly need the password again. And when one of those people leave, you of course change all the service account passwords they had. Right?
 
Managed service accounts promises to make this craziness a thing of the past. Here's how it works:
 
You create a new managed service account. You assign it to a *single* computer. Then, on that computer, you instruct the service to use that account. But you don't give that service the password; you have no idea what the password is. And no one else does either. And that is a good thing. :) The computer reaches out to AD and says, 'hey, I've got this service and it thinks this account should work.' AD says, 'oh, yeah, your computer has been assigned to handle that managed service account. Here's the password. And you'll need to request an update to the password in 30 days.' So this works just like computer account password management, except now you can use it for services, and you don't have to share that same security context across an entire computer like with LOCAL SYSTEM or NETWORK SERVICE.
 
I think the trick here is that they are leveraging the shared secret of the computer account password in order to safely communicate the shared secret of the managed service account. And this explains why that managed service account can only be associated with a single computer.
 
Oh ... and did I forget to mention that the SPNs for these accounts are also automatically set correctly? Makes these doubly useful for SQL and other services where getting the SPNs right is work.
 
Very cool stuff. I know I'm looking forward to shrinking the lengthy list of passwords I have to drag around with me. And I'll be able to sleep better when someone decides to leave. :)
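For those curious what this looks like in practice, here's a hedged sketch using the AD PowerShell cmdlets that ship with WS2008R2; the account and computer names are made up:

Import-Module ActiveDirectory
New-ADServiceAccount -Name svc-myapp                                          # create the managed service account
Add-ADComputerServiceAccount -Identity APPHOST01 -ServiceAccount svc-myapp   # tie it to a single computer
# then, on APPHOST01 itself:
Install-ADServiceAccount -Identity svc-myapp
# finally, point the Windows service at DOMAIN\svc-myapp$ and leave the password blank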
 
Applocker
You've probably seen the Software Restriction Policies settings in group policy. Those are the ones where you can prevent malware.exe from running. Not horribly useful, 'cuz the bad guys just have to rename the executable, and they are back in business.
 
This feature replaces that functionality, and looks much more useful. With this feature, you flip the thing backwards, and define a whitelist, and it works by using a digital signature derived from the executable, not by matching the filename. Looks very useful for kiosks, high-risk workstations, and application servers where the number of administrators is high and your change management process has lots of holes in it.
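As a rough sketch of building such a whitelist with the Windows 7/WS2008R2 AppLocker cmdlets (the path is a placeholder, and you'd want to run in audit-only mode before enforcing anything):

Import-Module AppLocker
Get-AppLockerFileInformation -Directory 'C:\Program Files\KioskApp' -Recurse |
    New-AppLockerPolicy -RuleType Publisher,Hash -User Everyone -Xml |
    Out-File .\kioskPolicy.xml
Set-AppLockerPolicy -XmlPolicy .\kioskPolicy.xml    # or import the XML into a GPO instead

Publisher rules key off the digital signature rather than the file name; hash rules cover the unsigned stragglers.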
 
BranchCache
Cache http or SMB content at a branch site. Only works for read-only traffic; write traffic still must go to source. This feature can be configured to work where content resides on a single host, or across all your branch's hosts. Think BitTorrent. Yes, this is a peer to peer technology, used for goodness. Requires ws2008r2 and windows 7 to use.
 
AD recycle bin
Yep, that much desired feature you've dreamed about. Someone accidentally deletes an object in your AD and you want to get it back without going to the extreme hassle of directory services restore mode and authoritative restores. This feature gives you the ability to easily recover without that hassle.
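With the WS2008R2 AD module, the sketch below is roughly what enabling the feature and recovering an object looks like; the forest name and object are placeholders, and note that enabling it requires the WS2008R2 forest functional level and can't be turned back off:

Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target 'example.edu'
# later, to recover an accidentally deleted user:
Get-ADObject -Filter {displayName -eq "Brian Arkills"} -IncludeDeletedObjects | Restore-ADObject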
 
Authentication mechanism assurance
Ever wanted to restrict high risk resources so that another authentication factor was required to access them? But only for those resources? And you didn't want to trot out a special server just to host those resources? And you also didn't want to trot out a separate authentication service?
Well, this feature will meet that need. At login, if a user authenticates via a method that additionally provides a private certificate, e.g. with a smart card, then they'll get a login token that has an additional group sid on it. That group SID can be used to ACL those high risk resources. If the same user logs in w/o the smart card, then they won't be able to access those same high risk resources.
 
SAML2 support via AD Federation Services
This feature allows Shibboleth and other federation technologies to more easily interoperate with Windows. This is a big deal in the federation space, and opens the door for quite a bit more use of federation from Windows based websites. You'll likely hear more about this here on this blog in the future.
 
AD web service
Your ws2008r2 DCs will have a web service listening on port 9389. This web service enables new powershell services, the new AD admin center, and sets the stage for future AD features.
 
I'm personally not thrilled about this "feature", but I guess I'll get over it.
 
AD powershell module
This makes it easier for admins to script against AD. You could already script against AD, so this isn't a significant increase in functionality, just a new venue/language to do it with.
 
Oh ... and powershell is now *part* of the operating system. No need to download and install it; it's there. And it'll be automatically patched via the usual MS mechanisms, instead of requiring separate downloads.
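A quick, hedged taste of what the module gives you (the account name is a placeholder):

Import-Module ActiveDirectory
Get-ADUser barkills -Properties displayName,uidNumber                # read one user plus a couple extra attributes
Get-ADUser -Filter 'uidNumber -like "*"' -Properties uidNumber |
    Measure-Object                                                    # count users that have a uid set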
 
Admin center
Active Directory Users and Computers (ADUC) gets a makeover and a new acronym (ADAC). New features include:
  • management across multiple trusted domains in the same console
  • ADSI Edit like view of *all* attributes
  • a different interface which appears more usable than ADUC
Offline domain join
Join a windows 7 or ws2008r2 computer to a domain when it isn't able to talk to a DC. Apparently this works by precreating the computer account, then writing to a special file the info the computer needs to complete the process when it does come online. So presumably there is a shared secret in that file to get the two working together. One major upside of this feature is that no reboot is necessary to complete the domain join, so your build process might use this feature to reduce the time/steps necessary to reach completion.
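Here's a hedged sketch of the two halves of an offline join with djoin.exe; the machine name, file path, and domain are just examples. On a machine that can reach a DC (this precreates the computer account and writes out a blob file):

djoin /provision /domain netid.washington.edu /machine NEWBOX01 /savefile C:\odj-newbox01.txt

Then on the offline machine, to consume the blob and complete the join:

djoin /requestODJ /loadfile C:\odj-newbox01.txt /windowspath %SystemRoot% /localos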
 
Group policy enhancements
  • lots of new settings
  • no schema update required, no special DC requirements to use new settings; some settings only work with Windows 7/WS2008R2
  • new preference setting types expand what you can do with them. Consider replacing your login scripts with group policy preferences. MS claims they have been able to replace login scripts in more than 90% of cases with preferences, and the upside is that your login times should improve drastically.
  • Support for auditing subcategories is added. See my post on auditing (coming next) for more details on this.
 
Bitlocker usb drive encryption
yep, encrypt your usb drive so that what happened to the British government doesn't happen to you. and if you work in a high risk area, force all usb drives to be encrypted or be unusable.

Remote desktop

hey, no more calling this terminal services. it's all remote desktop--even on servers. a few minor things change here, like:

  • support for multiple monitors !!
  • no more automatic license server; you must manually specify a license server
DirectAccess
Domain computers which are offsite but online can be configured to meet your policies with regard to security (e.g. must apply patches, must run firewall, must have AV installed), and also route traffic destined for your network through your DirectAccess server (like a VPN), while sending all other traffic through the local gateway. This feature requires IPv6, but can be configured to use IPv6 transition technologies.
 
Multiple active firewalls
With prior OSes, you could only have a single active firewall configuration. Now you can have one active firewall profile per type of network (private, public, domain). Very useful for mobile clients.
 
Support for DNSSEC
The ability to sign DNS responses is becoming more critical as the bad guys look to leverage man-in-the-middle, spoofing, and dns cache poisoning techniques in order to compromise your hosts.
 
Certificate Services enhancements
A Windows-based CA now will support enrollments *across* forests. Unfortunately, this requires a 2-way trust.
 
You can also do enrollment over http now, via a new Certificate Enrollment web service. So if you've got a firewall border which allows web traffic, then your remote clients can still enroll, update, etc.
Engineering4/13/2009 7:48 AMBrian Arkills
  
I'm sure by now everyone recognizes the power of WMI and has come to despise the WMI commandline tool, particularly when dealing with remote systems, or having to constantly write scriptlets to do simple things or get information. Something I hadn't realized until just recently was just how cool the combination of PowerShell and WMI is. Take, for instance, the need to figure out what a remote system is using as its DNS servers. With WMI and PowerShell, it's a mere two lines from the command prompt (the first gets the remote object, the second iterates through and prints the name of the adapter and DNS servers set for each adapter):
 
$remoteSystem = get-wmiobject -computerName <remoteSystem> -class Win32_NetworkAdapterConfiguration
$remoteSystem | foreach-object {$_.Caption;$_.DNSServerSearchOrder}

An exercise for the reader: filter the list so you only get the active wired/wireless adapters.

NB: this does not require PowerShell 2, get-wmiobject is remoteable in PowerShell 1.

Engineering4/8/2009 12:51 PMJames Morris
  
A minor update ...
 
As mentioned in the announcement about moving the DCs to p172, we're in the midst of rebuilding and adding additional domain controllers to UWWI.
 
Last week, we added yoda as a new, additional DC.
 
Today, we demoted lando, in preparation for rebuilding it with WS2008 and re-promoting it afterward.
 
We'll also be adding obiwan as a new, additional DC soon.
 
And later we'll be demoting chewie and luke, rebuilding them with WS2008, and re-promoting.
 
None of this activity will result in an announcement or outage notice, as none of it should be user-visible.
 
However, when we are done with all this activity, we will be making an announcement prior to moving the domain and forest to WS2008 functional level, as this enables new functionality that y'all might care about.
UW Infrastructure3/18/2009 3:30 PMBrian Arkills
  

I should start by saying that I really like to work with Active Directory. And I freely admit that I'm somewhat rare, being more familiar with LDAP than your usual geek. But regardless, I think more people should be using ldp.exe.

If you've worked with Active Directory for very long, you know that the usual mmc snap-in tools leave a lot to be desired. The biggest problem with them is that they regularly hide information from you  in the interest of "helping" you. And sometimes, in the interest of making stuff more fool-proof, they arbitrarily limit what you can do. In general, I hate most of the AD mmc snap-ins. I will use them occasionally, especially for doing ACL work, because the alternatives for doing ACL work are very, very ugly. So in my opinion, they are good for a few things, but in general, I use ldp.exe instead.

Ldp.exe takes a bit of getting used to, and is not for your general casual admin. If you only occasionally need to administer AD, then ldp.exe might help you out of a rough patch, but it likely won't be something you'd generally use.

Ldp.exe takes a more LDAP centric approach to AD. You connect, you bind, you execute other LDAP operations. You have access to specify LDAP controls that modify what the basic LDAP operations do. You have the ability to specify which attributes are returned, and the ability to directly set a filter so you can view objects which are in many different containers at the same time (unlike ADUC).

One of my favorite things about ldp.exe is that it enables me to see what is happening beneath the surface. And if I can see what is happening beneath the surface, then I am better able to understand what mechanisms are involved in any given technology, and better able to troubleshoot problems. It removes the blinders that the other mmc snap-ins throw on.

Now ... that removal exposes a lot of info, some of which is not especially useful. But you'd be surprised at how much of the info that is typically hidden by, say, ADUC is very useful. For example, pwdLastSet. I find it very useful to know when someone last set their password, especially if they are claiming that they just set their password and it doesn't work anymore. Does ADUC tell me this? And badPasswordTime tells me when the last unsuccessful password attempt happened, which might help me in the above scenario to determine that the user is mistyping their username or the domain. Again, you won't see this info in ADUC.

As you become more aware of what is under the surface, you'll begin to find that there are ways to accomplish tasks that the mmc snap-ins won't allow. For example, if you want to configure an account for Kerberos delegation, specifying that it is permitted to delegate to a service on a computer that is outside your forest, you are left high & dry by ADUC. But by paying attention to what is under the surface, you see that the msDS-AllowedToDelegateTo attribute is where the trusted delegation information is stored. And so you can directly modify that attribute, adding the values needed.
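For example, here's a hedged ADSI sketch of appending a value to that attribute from PowerShell; the DN and SPN are placeholders, and you could just as easily make the same edit interactively in ldp.exe with a Modify operation:

$acct = [ADSI]'LDAP://CN=svc-web,OU=Services,DC=example,DC=edu'
$acct.PutEx(3, 'msDS-AllowedToDelegateTo', @('HTTP/appserver.example.edu'))   # 3 = ADS_PROPERTY_APPEND
$acct.SetInfo()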

But one of the most beautiful things about ldp.exe is the ability to find all the objects which meet some specific criteria. Say I want to find all the objects which have a uid set. Can I do that with ADUC? No, because uidNumber was not included in the advanced find functionality. But with ldp.exe I simply set a filter of (uidNumber=*), maybe specify that I only want the DN attribute (so I'm not deluged by too much info), and I see the list of all the objects with a uid. ADUC so rarely has what I want in its search options that I don't use that functionality of it at all.

Another one of the things I like about ldp.exe is that it allows me to find out the critical bits so I can write code which might do something useful. Granted, not everyone writes code, and certainly not many people write code against AD. But if you are, I can't imagine getting along without ldp.exe.

You can also use ldp.exe to connect to other LDAP directories which aren't AD. For example, you might want to connect to the UW whitepages directory. Or to the UW email forwarding directory. More related to this below ...

I should say a few things about some other tools.

Adsiedit.msc is nearly as useful as ldp.exe. It also gives you increased access to all the info. And it comes with the GUI ACL interface, which can be very useful, especially if you have a security problem in your configuration partition (rare, but it happens). And you can use it to enumerate all the *possible* attributes for a given object, which is a much harder task via any other tool. But it lacks the searching power and configuration abilities that ldp.exe has, so I only call upon it occasionally. But I don't sneer at it the way I do at ADUC. :)

A short time ago, Mark Russinovich released an AD management tool called AD Explorer. It's interesting, in that it allows you to work with multiple domains, even across forests, at the same time. But I find that it has a sql-based approach, and this tends to limit its functionality. I also find that it is much slower than ldp.exe. It does simplify some things, but ultimately, I gave up on it.

I tried Softerra's LDAP Browser 2.6 awhile back. It also allows you to work with multiple domains or LDAP directories. It does have a LDAP based approach, but I didn't really like the way information was returned. My main desire in trying this tool was to see if it supported certificate-based authentication.

Which brings me to a final point. None of the tools I've mentioned provide certificate-based authentication. As you might know, both PDS and GDS require cert-based authentication. To my knowledge, there are no free Windows-based GUI tools that provide cert authN support.

As I've developed stuff which synchronizes with PDS and GDS, this gap has driven me a bit crazy. For the longest time, I'd use visual studio, via the .net code I had developed to access PDS and GDS, for troubleshooting and lookups. Then one day, I realized that I could make my own tool which addressed this gap. So I did. At this point, it isn't very fancy, and it certainly is not a GUI-based tool. What it is is a command-line, Windows-based tool. I'd be happy to share this tool (or the code) with anyone who has need of something like this. The tool or code does not magically give you access to GDS or PDS, however. You will still need to request access via a certificate, and run the tool from a computer that has that cert installed (with access to the private key granted to the user running the tool).

UW Infrastructure3/10/2009 12:03 PMBrian Arkills
  
Quite some time ago, I wrote a webpage about this process after Scott Barker and the iSchool piloted it. But in the course of time, that page has fallen into disuse mostly because it got lost in various linking shuffles.
 
 
And the info in it is still mostly valid.
 
But to keep things fresh, I thought I'd review here what we did in the recent UWWI p172 change.
  1. Most DNS A records have an 86400 TTL, i.e. 24 hours. So one day beforehand, lower the TTLs on key A records from the default of 24 hours to something quite low, i.e. the A record for each DC and the A record for the Windows domain itself, e.g. luke.netid.washington.edu and netid.washington.edu.

    The SRV records and CNAME records for a Windows domain all point at the A records so no changes needed there (yet).

    This is the step I messed up on, and the reason why the work was delayed one day. :(
  2. Find p172 addresses for each DC. The p172 equivalent IP may not be available, so you may have to find another open IP on the p172 equivalent network.

    P172 equivalent networks are:
    128.95.x      -> 172.25.x 
    128.208.y    -> 172.28.y 
    140.142.z    ->  172.22.z 

    Ask NOC to reserve these IP addresses, and make sure they are available to make the urgent DNS change you plan on making.

    Again, this is a step I messed up on, assuming the p172 equivalent IP would be available (for lando). :(
  3. One day later. RDP to each DC. Add p172 equivalent address (and p172 gateway). Disconnect.
  4. RDP to p172 address you just added (NOT the DNS name). Remove public address (and public gateway). RDP session will "flash" when you click OK on network settings.
  5. Send request that all the A records be changed to the new p172 address, and that all SRV and CNAME records that reference those A records be moved to the internal only/ private DNS zone file.
  6. Wait for changes. Use dig to verify changes have happened (see the sketch just after this list).
  7. Reboot each DC, being careful to have only one DC down at a time.
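For step 6, the verification is as simple as something like the line below (the hostname is just an example); you're looking for the new 172.x address and the shortened TTL in the answer:

dig +noall +answer luke.netid.washington.edu A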

Alternatively, you might move one DC to p172 at a time, asking for DNS changes between each move. This would be a lot more complicated though, because there will be changes in the public DNS zone and changes in the private DNS zone, and any given SRV/CNAME record will have different states in those two zone files. In other words, this added complexity is likely to mean more opportunity for mistakes. So I'd advise against it.

You might also swap the order of steps #4 and 5, taking care to RDP into all the DCs via the p172 address first. This might provide a better client experience.

Windows domains with domain-based DFS will want to schedule this work for a time where clients are less likely to be accessing network files.

Of course, if you have trusts with other domains/forests, and they have firewall rules, you'll want to keep them in the loop.

And that's it. Enjoy!

UW Infrastructure2/19/2009 11:25 AMBrian Arkills
  
3/4/2009 Edited list of out-of-box management agents
 
So recently I was able to take a training class on ILM. This post contains some of the core concepts and info I learned in that class, along with a few interesting bits about our ILM implementation here. I've written once before about ILM here, https://sharepoint.washington.edu/windows/Lists/Posts/Post.aspx?ID=26, mostly as an intro without any technical depth. This post will dive much further into the technical details.
 
In general, ILM is a directory synchronization tool and a certificate management and deployment tool. Since directories commonly hold identity information, this set of functionality neatly rolls up into the product name. ILM is commonly used to provision user accounts, and has some capability to manage password synchronization.
 
In a short time, Microsoft will release a new version of ILM, currently code-named ILM2. This product has a web portal front-end to it (more on that in a second), workflow functionality, and some very cool cross-product tie-ins. This combination of new features allows very cool new functionality, such as allowing you to provision groups that are dependent on something else. Imagine you know you need to be in a certain group to get access to something. ILM can capture your request via its web portal, or even via an Outlook snap-in. It then sends an email request along to the designated admin for the group, asking if you should be added. If the admin has the outlook snap-in, they can approve the request from within outlook. If they don't use outlook, they can follow a link in the email to the web portal to approve/deny. I don't know whether we'll ever see this feature set of the product in use here at the UW, but it is fun to imagine. :)
 
Anyhow, let's move back to the core product.
 
ILM is a state-based synchronization product. This means that it reads in the state of all the various data sources, compares them determining what has changed, and then acts on just the changes. As you might imagine, this has both advantages and disadvantages:
 
Advantages:
  • it isn't dependent on a specific order of events
  • it isn't reliant on installing agent code at each data source to send data to it, and any unreliability/security issues such agent code might bring with it
  • it can enforce that the state be kept as expected
Disadvantages:
  • Requires processing power to evaluate state differences
  • the timeliness of changes isn't necessarily as good as an event-based process, i.e. there is some synchronization latency
 
ILM comes out of the box with a wide variety of management agents for common directory and data source products. An ILM management agent is responsible for managing the flow of data to and from a data source.
 
These include:
  • AD
  • ADAM
  • AD Global Address List
  • Attribute-value pair (AVP) file-based
  • Delimited text file
  • DSML
  • Exchange 5.5
  • Exchange 5.5 (bridgehead)
  • Extensible connectivity
  • Fixed-width text file
  • IBM DB2 Universal Database
  • IBM Directory Server
  • LDIF file-based
  • Lotus Notes
  • Novell eDirectory
  • Oracle
  • SQL
  • Sun and Netscape directory servers
  • Windows NT 4.0
There are also 3rd party management agents.
 
Conceptually, ILM has 2 object spaces you need to understand. Each management agent has its own connector space (CS). This includes all the data source objects for that management agent. And then there is a single metaverse space (MV). The metaverse space represents those connector space objects which have been projected or joined (more on what those two new words mean in a minute). In other words, the metaverse is the space where things come together.
 
Each management agent defines which objects in its connector space should be projected or joined to the metaverse. To project means that the resulting object in the metaverse will consider this management agent's object as authoritative for that object. If you have no management agents projecting, then your metaverse will be empty. Put another way, projecting is the way to provision objects into the metaverse, and each metaverse object has a special relationship with the management agent or agents that projected it. To join means what you might imagine; it connects an object in this management agent's connector space with an object in the metaverse. Both projection and joining are dependent upon filters and rules that determine which objects should do what, and in what way. A projection filter determines which objects should project. A projection rule determines which connector space attributes should map to which metaverse attributes. A join filter determines which objects should attempt to connect to which metaverse objects. A join rule determines which connector space attributes should map to which metaverse attributes. Both join rules and projection rules can go in either direction. In order to achieve some synchronization, you'll need rules that go both in and out of the metaverse.
 
This leads us to schema, i.e. the definition of what kinds of objects there are and what kind of data can be associated with each object. Each data source comes with its own schema. Each management agent (and therefore its connector space) has a schema (which may or may not match the data source). And the metaverse has a schema that somehow melds all of this together.
 
Without any special skills, one can directly map an attribute in one connector space to an attribute in the metaverse. If you need to transform the data in any way, or have any kind of logical dependency, then you need to use an extended rule, which involves writing some code--which is not especially hard.
 
Moving back to concepts, the way in which data is moved around and synchronized is all tied to the management agents. Each management agent has a set of run profiles.
 
Each run profile can consist of one or more of the following actions:
  • Delta import (staged). Evaluate only changed objects in the data source, and make changes only to the CS.
  • Full import (staged). Evaluate all objects in the data source, and make changes only to the CS.
  • Delta import and delta sync. Evaluate only changed objects in the data source, make changes to the CS, then evaluate projection/join rules for only those CS objects which changed, make the resulting changes to the MV, and follow any external attribute flow rules, making changes to other management agents' CSs.
  • Full import and delta sync. Evaluate all objects in the data source, make changes to the CS, then evaluate projection/join rules for only those CS objects which changed, make the resulting changes to the MV, and follow any external attribute flow rules, making changes to other management agents' CSs.
  • Full import and full sync. Evaluate all objects in the data source, make changes to the CS, then evaluate projection/join rules for all CS objects, make the resulting changes to the MV, and follow any external attribute flow rules, making changes to other management agents' CSs. A full sync is needed after a rules change to apply the new rules to CS and MV objects which didn't change during a delta.
  • Delta sync. Evaluate projection/join rules for only those CS objects which changed, make the resulting changes to the MV, and follow any external attribute flow rules, making changes to other management agents' CSs.
  • Full sync. Evaluate projection/join rules for all CS objects, make the resulting changes to the MV, and follow any external attribute flow rules, making changes to other management agents' CSs.
  • Export. Push *all* CS objects to the data source per the external attribute mappings.
So imports move data into the connector space.
 
Syncs move data from one connector space into the metaverse and back out to other connector spaces. Note that if you have many management agents, you need to run a sync on each of them to achieve complete synchronization.
 
Exports move data from the connector space back to the data sources. Exports are the only way to achieve some kind of change in the world outside ILM; without them, you are just playing around in an ILM universe.
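
If you're curious what driving these run profiles looks like in practice, it's typically a small script against the MicrosoftIdentityIntegrationServer WMI provider. Here's a minimal sketch; the management agent name and run profile name below are placeholders, not our actual ones:

' Minimal sketch: kick off a run profile via the ILM WMI provider.
' "PDS MA" and "Delta Import and Delta Sync" are placeholder names.
Set svc = GetObject("winmgmts:{impersonationLevel=impersonate}!root\MicrosoftIdentityIntegrationServer")
Set agents = svc.ExecQuery("SELECT * FROM MIIS_ManagementAgent WHERE Name = 'PDS MA'")
For Each ma In agents
    WScript.Echo ma.Name & ": " & ma.Execute("Delta Import and Delta Sync")
Next

Execute hands back a result string (e.g. "success"), which is handy when you chain several run profiles together from a scheduled task.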
 
We rolled ILM out depending heavily on a Microsoft Consulting engagement to get things in place so the UW Exchange deployment could have an address book which didn't look rotten.
 
Our existing implementation of ILM connects an AD with an OpenLDAP directory, or more specifically UWWI with PDS. As you might have noticed, OpenLDAP wasn't one of the out of the box management agents listed above. At the time of our implementation, there was a 3rd party management agent, but it had known problems which made it not acceptable for our deployment. Since then, an open-source openldap management agent which addresses those problems has been released, but we haven't had a chance to evaluate it.
 
Our deployment uses an extensible management agent built around the LDIF management agent to connect to openldap. Our openldap directory dumps change logs to ILM, and ILM reads those in delta imports.
 
In the existing implementation, AD projects to the MV, and PDS joins. And only objects from PDS which are of class uwPerson are joined. The attribute flow is mostly online already, so I won't go into that. Our AD has about 450000 users, while PDS has about 1.5 million objects.
 
One of the gotchas in our existing implementation is the high number of disconnectors. A disconnector is any connector space object which is not related to a metaverse object (i.e. joined or projected). The problem with lots of disconnectors is that every time you do a sync, *all* disconnectors are re-evaluated to see if they now join up. I'm in the process of investigating whether some design changes wouldn't eliminate that problem in our implementation, and allow us to go from a 3 hour sync cycle to something much shorter. Currently the delta imports take about 5 minutes, while the delta sync takes 90 minutes.
 
Another gotcha is that while the MS consultant refined the rules a couple times before he left, it doesn't appear that he ran a full sync after those rule changes. This means that only objects which have changed since then have had those updated rules applied. So there is inconsistency in UWWI in terms of what should be there. Now I may be the only one who has come across examples of this, but it really bugged me until I found out why.
 
The final notable gotcha in this space revolves around the "name" attributes. And there's a complex story here, which for the sake of my sanity, I am not going to go into details about. The problems here are:
  • the mapping logic is inconsistent depending on the initial state,
  • not all UWWI objects get joined to a PDS object (b/c of the uwPerson join rule),
  • the input validation and formatting from the source systems to the PDS attribute we use is non-existent,
  • and our mapping logic code falls short of addressing all cases (but there's no way you can address all possible cases given no input validation and inconsistent formatting).
Oh ... and only employees have any real ability to change their name.
 
We've got an imagined fix for all of this, where the 'manage your uw netid' page would allow *every* uw netid to manage their name information in a consistent format, with input validation that would then flow through, but it hasn't gotten enough priority to be resourced.
UW Infrastructure2/12/2009 1:27 PMBrian Arkills
  

A Windows SIG is kicking off! Details below.

 

Nathan & I will be presenting, and as a teaser, take a look at this new architectural picture. I'm hoping to have some UWWI stats done in time for the presentation too.

---

Windows Admin SIG Meeting:  “UWWI:  What's in it for me?”

 

What: Windows Administration SIG Meeting

Where: Allen Auditorium, Allen Library

When: Wednesday, January 28, 2009

Time: 3:00PM-5:00PM

 

Please RSVP to coston@u.washington.edu

 

This will be our official kick-off meeting, so make sure you’re there!

 

To kick things off right, Brian Arkills and Nathan Dors will be presenting on UW Windows Infrastructure (UWWI) and the UW Groups service.  This is intended to be a highly interactive discussion, so bring your questions and use cases! 

 

More information about the UW Windows Administration SIG is available at https://sig.washington.edu/itsigs/SIG_Windows_Administration.

Join the discussion by subscribing to our Mail List at http://mailman2.u.washington.edu/mailman/listinfo/mswinadmins.

UW Infrastructure1/15/2009 2:50 PMBrian Arkills
  

Since the UW Exchange project, UWWI has had a development environment, but it's been at best a very poor facsimile of the real thing.

More recently, I've been pouring time into making it a more realistic environment for testing our core components.

From time to time, we must make changes to the core infrastructure components:

  • the domain controllers themselves, whether that's the operating system, or other significant changes
  • fuzzy_kiwi, the account provisioning agent
  • slurpee, the group provisioning agent
  • subman, the service lifecycle agent
  • ilm, the directory synchronization agent

Testing those changes has been hard up until now (and will be until the new environment is ready very soon), and has involved finding test cases within the production environment.

Of course, some changes are of a nature that you can't test them in production--which is why a development environment is required.

In fact, there are a slew of ilm changes queued up in my task list which are blocked because I currently have no safe way to test them before implementation.

We plan to upgrade the UWWI DCs to Windows Server 2008 sometime in the coming months, but first, we wanted to test that WS2008 didn't cause any problems with our core infrastructure components. Would fuzzy_kiwi run on WS2008?

Getting fuzzy_kiwi installed and running in a separate domain instance was an adventure of its own, because there was no existing documentation on getting it running (there is now). But I'll skip that story. :)

However, there are a couple interesting things here: I've made some key changes to fuzzy_kiwi so that it is now self-aware of where it is running. If it detects that it is running in our development domain, then it does things differently. Otherwise, it acts normally. In our development domain, fuzzy_kiwi creates accounts disabled and ignores the password it is given. Instead, it asserts its own very long, random password--and each instance has a new random password. That was a trick I'll get to later. There is also a new feature in fuzzy_kiwi where some accounts can be 'untouchable' by kiwi requests. This is needed especially in dogfood where you want the administrators to have different passwords on their accounts and you don't want those accounts to be disabled. It wouldn't do to have the only folks who can make changes locked out of that domain. :)

Getting back to the random password in our dev environment feature, I made a few interesting discoveries about coding such a thing. I knew from my own math and computer science coursework background that generating truly random numbers was a very difficult thing. And generating a random password is at its heart all about generating random numbers. Without going into the details of the algorithm I used (that wouldn't be very smart, now would it?), I do want to make a few remarks about some of the building blocks.

Within the .net framework, I came across the System.Random class and its System.Random.Next method as a way of generating random numbers. The class and method are very easy to use, and even give you a way to specify a lower and upper bound on the random integer returned. It wasn't until I started looking at what it generated that I saw a significant problem with the class: by default, it generates exactly the same sequence of "random" numbers on successive runs (within a suitably short period of time). This is because the algorithm used behind the class focuses on randomness within the sequence it generates--not randomness of what is used to initially generate the sequence. By default, the class "seeds" the sequence by using the tick count of the time you instantiate it. But in practice, that means that there is often duplication on subsequent runs. You can supply your own "seed", but then you are stuck with a circular problem: generating a random seed so you can get a random number.

So I set off looking for something better. And I came across the System.Security.Cryptography.RNGCryptoServiceProvider class and its System.Security.Cryptography.RNGCryptoServiceProvider.GetBytes method. This class is a bit harder to use than System.Random. It requires you to supply an array of bytes, which it fills with the output. Each output is a random number from 0-255 (it's a byte after all), and there's a slight variant method, .GetNonZeroBytes, which outputs random numbers from 1-255. Neither option allows you to ask for lower and upper bounds, so you end up performing modulo operations on the output (assuming you want something smaller than 256) and addition/subtraction to fit your needs. From what I've seen the numbers generated are pretty random, and there isn't duplication on successive runs like with System.Random.
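
Just to illustrate the kind of mapping I mean, here's a tiny sketch of turning one random byte into a character; the stand-in byte value and character set are purely illustrative, and this is not the actual fuzzy_kiwi algorithm:

' Sketch: map one random byte (0-255) onto a character set via modulo.
' byteVal stands in for a byte you'd get back from GetBytes; the charset is illustrative.
charset = "ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz23456789"
byteVal = 201
idx = (byteVal Mod Len(charset)) + 1   ' Mid() is 1-based in VBScript
WScript.Echo Mid(charset, idx, 1)

Note that a straight modulo introduces a slight bias toward the front of the character set unless 256 happens to be an exact multiple of the set's length; whether that matters depends on how paranoid you need to be.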

This post is probably beyond what most folks are interested in, but you never know what will be useful, or be a catalyst to generate that critical feedback loop that brings in vitally relevant information. I'm likely to have more technically detailed and arcane posts in a vein similar to this in the future. In an effort to balance this likely trend, I'll try to keep the more technically heavy content later in the post, and keep the more widely useful and relevant information near the top.

UW Infrastructure12/11/2008 8:49 AMBrian Arkills
  
In response to lots of questions about how to correctly license Sharepoint, our own Scott Barker helped put together a useful conference call with Microsoft licensing experts.
 
What follows are my notes on what we heard on that conference call. These notes should not be taken as fact--you should verify their veracity for yourself with Microsoft.
 
The licensing expert began by explaining the usual way that Microsoft server products are licensed. Generally, you buy a server license, client access licenses (CALs) for each "internal" user, and an External Connector license for each "external" user.
 
For Microsoft Office Sharepoint Server (MOSS), if you use this traditional approach, it amounts to:
  • Windows Server license
  • MOSS license
  • Windows Server CALs for internal users
  • MOSS CALs for internal users
  • MOSS External Connector for external users
  • SQL Server license
  • SQL CALs for all users (or per processor CAL licensing)
Instead, Microsoft has introduced new specialty licensing for MOSS. The "Sharepoint for Internet" license covers:
  • MOSS license
  • Windows Server CALs for internal users
  • MOSS CALs for internal users
  • MOSS External Connector for external users
In other words, this specialty license is a blanket covering all of the licensing except the Windows Server license and the SQL licensing.
 
The licensing expert had a couple clarifications on questions that were asked.
 
Internal users were defined to be the users we consider employees.
External users were everyone else.
 
The licensing expert said she would consider our organizational boundaries to include *all* of UW, although I have some serious doubts about whether that was entirely accurate (or relevant), but wasn't able to follow up in enough detail to get those doubts clarified.
 
This specialty license does not cover Windows Sharepoint Services (WSS), the version of Sharepoint that comes bundled with Windows Server. WSS licensing follows the Windows server licensing model, so you'd need:
  • Windows Server license
  • Windows Server CALs for internal users
  • External connector (or individual CALs for each) for external authenticated access
  • Anonymous access requires *nothing* beyond the Windows Server license itself (no CALs)
Sharepoint11/17/2008 9:10 AMBrian Arkills
  

So as part of a recent Datawarehouse initiative here at the UW, there's been quite a bit of activity around Windows authentication delegation, sometimes better known as Kerberos two-hop authentication. I know the Law School has been using two-hop authentication for awhile now, and recently had a problem with it, so I think this post is likely relevant to quite a few people.

To explain what two hop authentication is, we'll need to jump back to Windows authentication basics to make sure we are all on the same page. If you already understand it, then jump down to "The Meat".

So when you login, you give your password (or some other credentials) to the lsass.exe process on what is usually a (physically) local (to you) computer. The lsass.exe process on your computer hashes some other info (a timestamp) using the password to create the hash, then sends that hash over the wire to a domain controller for verification. Note that the info on the wire doesn't contain any form of your password. The domain controller compares that hash to what it expects, and if successful, passes back a login token that can be used. Depending on the details of the authentication scenario, that login token might have additional stuff (usually domain local & local groups) added to it before you receive it. Then you can use that token to access stuff on the local computer and over the network. The reason you can use it to access stuff off the local computer is because the token itself has been marked as re-usable, and the local lsass.exe process considers that mark as inviolable.

Sometimes you access stuff over the network, and you are challenged for your credentials. When that happens, you actually do send your password over the wire, and the lsass.exe process on the remote computer takes your password, does the same dance with the domain controller, *except* this time the token doesn't get the re-usable mark. This is because that remote computer doesn't need your login token except for resources local to it. It uses that token, and we say the remote computer is impersonating you to gain access to resources (local to itself) on your behalf. In Windows terminology this is called Impersonation. Impersonation can also happen without a password challenge, and in that case, your local lsass.exe which has a re-usable copy of your login token passes that token to the remote computer, which then uses that token to ask the domain controllers for a non-reusable token. You might also think of this scenario as one-hop, as the login token is one hop removed from where the user physically is.

Now, say that the remote computer needed to access network resources that aren't local to it, as you. That's the scenario we are concerned with here. If it helps, imagine a web service that needs to access a sql service as you to provide the right data. In this two hop scenario, you pass your creds (either password or login token) to the first remote server. That remote server has something special about it. The user account that is running the network service process has been granted a special ability called delegation. The user account might be SYSTEM, in which case the user account is the computer object in Active Directory, or it might be some specific service account. Using delegation, the first remote server can take the creds you've provided to get a login token that is re-usable. It then can reach out to the 2nd remote server, and provide a non-reusable token to access whatever it needs on that 2nd remote server. There are two levels of delegation: unconstrained delegation and constrained delegation. With unconstrained, the 1st remote server can get a re-usable login token that can be used to access *any* network services that token has access to. With constrained, the 1st remote server is limited so that the re-usable login token can only be used locally and with specific network services. Obviously constrained delegation is more secure and therefore preferable to unconstrained.

A few relevant factoids about delegation:

  • Delegation relies on Kerberos authentication. If you can't do Kerberos to the 1st remote server, then you can't use delegation to achieve the second hop.
  • Kerberos authentication relies on a bunch of pre-requisites, so it can sometimes be tricky to achieve.
  • You can have as many hops as you'd like, as long as each server in the chain has delegation privileges to the next server in the chain.
  • Granting the delegation privilege is practically an all or nothing thing. If you grant it to user serviceX, it means that *every* user who passes creds to serviceX will have a re-usable login token available to serviceX. If serviceX is insecure or not trustworthy, then really bad things can happen. Aside from the constrained level, there is one check on this privilege--you can mark certain user accounts as being "sensitive". This means that they cannot be used via delegation at all. You will want to mark all your domain admin accounts as sensitive, and likely quite a few others too (see the sketch just after this list).
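
Here's a quick sketch of setting that "sensitive" flag programmatically via ADSI; the distinguished name is a placeholder, and the same setting is exposed in Active Directory Users and Computers as "Account is sensitive and cannot be delegated":

' Sketch: mark an account "sensitive and cannot be delegated" via ADSI.
' The DN below is a placeholder.
Const ADS_UF_NOT_DELEGATED = &H100000
Set usr = GetObject("LDAP://CN=Some Admin,OU=Admins,DC=example,DC=edu")
uac = usr.Get("userAccountControl")
usr.Put "userAccountControl", uac Or ADS_UF_NOT_DELEGATED
usr.SetInfo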

So that's the basics, and now we'll move onto the more interesting stuff.

The Meat

So I was saying that the datawarehouse project here has chosen an architecture design that relies on two-hop authentication. The primary components which do this are sql servers that, via the sql linked server functionality, bring data from many sql servers together into a view. Complicating this picture is the fact that our user accounts and the sql servers themselves are in two different forests.

We had a lot of problems getting this to work correctly. For Kerberos to work correctly, you have to make sure you have all the service principal names registered correctly. You also have to ensure all the computers' clocks are within the allowed time skew. And that they all are trying to use Kerberos. And that you have a forest trust, not a domain trust.
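
For the SPN piece, checking and (if needed) registering them is usually just a couple of setspn commands; the host and account names below are placeholders:

setspn -L DOMAIN\sqlsvc
setspn -A MSSQLSvc/sqlhost.example.com:1433 DOMAIN\sqlsvc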

In the course of all these problems, we finally asked PSS to come help us. They sent two consultants on site on two separate occasions, but both left stumped. We were left with all kinds of additional (undocumented) claims from the two different consultants, in some cases contradictory to each other.

We eventually figured out both the problems which were ailing us.

The most serious problem was that in marking a wide variety of accounts as sensitive, we (actually it was me) accidentally marked a special built-in Windows account as sensitive. That account is the KrbTgt account. This account has a very special function. It issues *every* login token for your Windows domain. So, obviously, it's very important. By marking the KrbTgt account sensitive, apparently every login token it issues is also marked sensitive. This is undocumented behavior, but from a logical perspective it makes sense. So for the span of about 3 months I can say with definitive authority that there was absolutely no delegation used from that domain--because every account was effectively marked as sensitive. Fortunately, not many folks are using delegation from that domain as of yet.

Note that for some domains this might be desired behavior, and that it's really a shame that this is undocumented behavior. I'd imagine that quite a few Windows security organizations might want to add this to their locked down configuration guides.

We also had a sporadic problem on certain servers whose hostnames have a DNS suffix corresponding to an MIT Kerberos realm which happens to have a Kerberos trust from one (and only one) of the forests involved. That problem happens because Kerberos uses mutual authentication--meaning one computer verifies that any other computer it talks to is who it claims to be. For this, it uses what are called servicePrincipalNames (SPNs). But you have to find the right authority for a given SPN, and of course, the Windows logic assumes that the DNS suffix on an SPN is meaningful even though that isn't necessarily true. It turns out that if the servers involved have the registry keys needed to resolve the KDCs for an MIT Kerberos realm in this scenario, then Windows works as you'd like. In other words, if it can find the MIT Kerberos realm, then it can check it for the SPNs, find out that they aren't there, and then look elsewhere for the SPNs. But if it can't find the KDCs for that MIT Kerberos realm, then it gets stuck. Putting the registry keys for resolving the MIT Kerberos realm on all relevant computers is one fix; another is not using that DNS suffix in any server hostnames.

Put another way:

Windows domain blah.doodoo.com has a Kerberos realm trust with jojo.com. Windows domain blah.doodoo.com has a server named sql1.jojo.com in it. Out of the box Windows clients in blah.doodoo.com *can’t* negotiate Kerberos with sql1. Windows clients with the appropriate KDC registry keys referencing the Kerberos realm jojo.com *can* negotiate Kerberos with sql1.

In other words, because you have that Kerberos realm trust, you can’t plan on having Kerberos auth to any computers with a DNS suffix that matches that realm unless all your clients have got the KDC reg keys to that realm. Somewhere in the background it’s likely that there’s an error happening which won’t give up and allow the local Windows KDC to issue a TGS for a host with that DNS suffix, unless it can contact the external Kerberos realm KDCs to see if they have a more authoritative SPN.
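
If you go the registry key route, ksetup will populate those keys for you. Using the hypothetical names above (kdc1.jojo.com being a made-up KDC name), it looks something like this, run on each client that needs to resolve that realm:

ksetup /addkdc JOJO.COM kdc1.jojo.com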

If you do want to read up on this technology, my favorite blog site, the MS Directory Services blog, has a very useful post that you can add to your reading list:

http://blogs.technet.com/askds/archive/2008/06/13/understanding-kerberos-double-hop.aspx

UW Infrastructure11/17/2008 8:13 AMBrian Arkills
  
Anyone else annoyed that your Exchange calendar doesn't show the UW holidays?
 
According to David Norton, our local Exchange expert, this is because for Exchange, calendaring is really just a system of decentralized email messages under the hood. This allows Exchange to be amazingly flexible (what other calendaring solution allows one person to use Gregorian while another uses Hebrew Lunar?), but it also makes centralized calendaring tasks like adding the UW Holidays to everyone's calendar hard.
 
I hear the Exchange service roadmap is looking at a future feature to address this, but in the meantime, I went out and created a solution for myself which just happens to be re-usable.
 
Exchange has a concept of adding holidays to your calendar. You choose from a lengthy pre-defined set, and they are imported onto your calendar. Via Outlook, you can get there via Tools, Options, Calendar Options, Add Holidays. Hint: uncheck "United States" so you don't accidentally import the wrong set of holidays. Of course, Microsoft didn't come and ask the UW about when our holidays are, so we're not in the list.
 
But you can change the list.
 
Both http://www.slipstick.net/calendar/holidays.htm and
http://office.microsoft.com/en-us/outlook/HP012304061033.aspx describe how to create a custom list of holidays. This involves editing your local outlook.hol file at c:\Program Files\Microsoft Office\Office 12\1033\outlook.hol and adding the holidays in the right format.
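 
For reference, the format is simple: a bracketed section name with a count of entries, followed by one "description,yyyy/mm/dd" line per holiday. The two entries below are just illustrative, not my full UW list:
 
[UW Holidays 2009] 2
New Year's Day,2009/1/1
Memorial Day,2009/5/25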
 
I've gone ahead and done that for the 2008, 2009, and 2010 holidays (collected from http://www.washington.edu/admin/hr/holidays/holidays.html), and it works fine.
 
If you'd like to use the same solution, download my outlook.hol file, exit outlook, make a copy of your outlook.hol file (in case something goes wrong), copy in my version, then open outlook. Afterward, go back to the holidays, uncheck "United States" again (it's a persistently annoying item, isn't it?), and check the 3 new holiday items: UW Holidays 2008, UW Holidays 2009, UW Holidays 2010 and hit OK. Now your calendar should show the UW holidays.
 
It's not ideal, but it gets the job done, and I imagine the Exchange roadmap feature's solution will be very similar to this.
 
Exchange11/6/2008 11:04 AMBrian Arkills
  
This site is the convergence point of three prior Sharepoint sites.
 
Those prior sites were:

The convergence of these sites is possible because of a significant milestone for the UW Sharepoint service. That milestone is that there is a suggested cost for that service (which is pending an official announcement).

The process by which these sites converged is technically interesting, and is the topic of this blog post.

Generally speaking, you can export (backup to file) and re-import any Sharepoint site--MOSS or WSS--using special stsadm commands. And in fact, in this instance this is a convergence of two MOSS sites and one WSS site.

One special issue was that the Sharepoint server hosting the old sites was in a different domain. When this is the case, you must either choose to translate security references or drop your security configuration, and recreate it after the import. The translate security functionality has some gotchas. It is a per-user operation which means you've got to get a list of all the users. That can easily be done by running the export command once and looking at the log file. Another gotcha is that when you run it to convert say nebula2\user to netid\user it converts *every* instance of nebula2\user in that sharepoint farm to netid\user. This might be very disruptive. A final gotcha is that if the target user you want to convert to has logged in to the source Sharepoint farm, or if they've been granted permissions, then the operation can cause destructive behavior or error.

An example of the security translation command is:

stsadm -o migrateuser -oldlogin DOMAIN\user -newlogin DOMAIN\user [-ignoresidhistory]

The combination of these security translation issues left me thinking it was less work to just reconfigure security after the import. Fortunately, the blog site was on its own server in the same domain as the UW Sharepoint server, so it didn't have any of these issues.

An example of the backup/export command I used is:

C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN>stsadm -o export -url http://viewpoint.cac.washington.edu/blogs/ms-collab -filename c:\finaltry\ms-collab.cmp -includeusersecurity

An example of the import command I used is:

C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o import -url https://sharepoint.washington.edu/windows -filename c:\temp\ms-collab.cmp -includeusersecurity

The import command will fail if you are not importing to a site with the same site template as the site which you exported from. This is a very important gotcha for when you are trying to converge several Sharepoint sites into one.

And in fact, this site template restriction was a problem for this convergence. The roadmap and wit sites listed above were both from the team site template, whereas the blog site was from the blog template.

To handle this, I decided that the blog site had more content which was harder to manually reproduce via another method, so I went with a blog template, and only imported that site via this method.

For the roadmap site, there were other issues. That site was a subsite of a larger MSCA site collection, and some of the content in the roadmap site had some cross-site lookups to data in that larger MSCA site collection. Those were minimal, and I decided that I could lose those lookups and convert that data to something more static. However, the roadmap site was fortunate in that it was basically just two lists. So I exported both lists to a spreadsheet, recreated the custom lists (i.e. recreated columns etc), then copied and pasted the info, massaging the cross-site lookup data slightly.

Finally, there was the wit team site which had a lot of internal documentation. I opted to create a subsite for that, limiting the access to it to internal team members. However, if the goal truly had been to converge, I could have exported all the documents and uploaded them.

So for two of the sites, I used an ad hoc approach because it was good enough and was the least cost. However, there are other methods you can use when an ad hoc approach isn't acceptable.

http://www.sharepointnutsandbolts.com/2007/10/stsadm-export-content-deployment.html has a very detailed comparison of ways to move Sharepoint content around that documents some of these other approaches.

Sharepoint10/30/2008 1:17 PMBrian Arkills
  
I'm here at the Microsoft Management Summit 2008 in Las Vegas, and Bob Muglia (Senior Vice President, Server and Tools Business) announced yesterday the beta release of Ops Manager 2007 Cross-Platform Interop.  In short, it's all about using Ops Mgr 2007 to monitor and manage non-Windows systems.  Their current priority non-Windows platforms are RHEL 5, SUSE 10, HP-UX 11, and Solaris 10, but they have plans to support many more platforms.  I'll post more details Thursday after a breakout session on it.
 
In the meantime, you can read more about it in the System Center Team's blog post.
 
Oh...and the theme the Cross-Platform Interop team is using...flying pigs.  It's quite funny.
Engineering4/30/2008 9:00 AMJames Morris
  
So James Morris & I were engaged in a consulting gig for a UW client awhile ago. This involved helping them sort out how to migrate a set of servers between Windows domains, but not doing the actual migration. For free, we had given them some pointers on what to do, but for many reasons this help wasn't enough to produce a working solution that they could then use. And even after they engaged us on a consulting gig, the problems were, for quite awhile, enough to stump both James & me.
 
So there are a couple general purpose tools for this kind of thing. ADMT (Active Directory Migration Tool) is an excellent tool, but takes longer to set up, and often has more features than you might want. ADMT is a free download from Microsoft. Subinacl is another excellent tool; it takes little time to set up, but may not do everything you want. Subinacl is also free, as part of the support tools in the operating system. The two tools have slightly different paradigms, which is worthwhile background info to cover first.
 
ADMT uses its own database to determine what should be reACLed. ADMT takes the list of all users and groups that you have previously migrated with ADMT and uses this list as a reference when it encounters entries in an ACL it is being asked to fix. ADMT uses the SIDs of those users/groups to keep track of those mappings. So if you have not migrated any users or groups with ADMT, then when you ask ADMT to translate security on a computer, it will effectively do nothing. This is one of the reasons why ADMT takes longer and is more time-consuming. For user/group migration activities, ADMT requires admin level permissions in both domains, which can be a non-starter in some scenarios. For computer activities, ADMT only requires access to the ADMT database which previously did the user/group migrations, and admin access on the computer(s) being translated and/or migrated.
 
Subinacl doesn't use a database to determine what should be reACLed. You have many choices. You can replace any given account (by name, such as nebula2\barkills) with another account (e.g. pottery\barkills), which isn't terribly efficient when there are lots of accounts. Or you can replace all accounts from a given domain with their counterparts in another domain. This latter option is much more efficient, and there is next to no preparation needed. To do either, you must have access to the domains involved, because ACLs have SIDs in them, not usernames or group names, and therefore usernames/group names have to be translated to SIDs before work can be done.
 
So ADMT was thrown out as a solution in this case because there was a small set of servers to migrate, and the users/groups involved had already largely migrated themselves, so there was no reason to do all the work required with ADMT.
 
Subinacl was the target solution. Specifically, the commandline:
 
subinacl /subdirectories c:\ /changedomain=pottery.washington.edu=nebula2.washington.edu
 
in other words, replace all instances of users/groups from the domain pottery.washington.edu with their exact username/groupname complement in nebula2.washington.edu for all files on the c:\ drive.
 
So for this client, they would run this command on a server in the pottery domain from the context of a nebula2 user account that had admin privs on that server.
When they did, this command (and a similar one with the /migratetodomain switch) would always result in some kind of error. And this, of course, led to greyer and fewer hairs for James & me.
 
Errors we encountered included the following:
1722 Could not find domain name : nebula2
Error finding domain name : 1722 the RPC server is unavailable
1722 Could not find domain name : pottery
Error finding domain name : 1722 the RPC server is unavailable
1355 Could not find domain name : pottery
1355 Could not find domain name : nebula2
5 Access is denied
 
Early efforts focused on getting a domain trust setup, DNS entries correct, and testing connectivity to ensure that firewalls were not interfering. All of these were part of the problem, but once fixed, the errors continued.
 
The next hurdle was ensuring that the user account running the command had the ability to "enumerate" users on both domains. The combination of security policy settings like "network access: allow anonymous SID/Name translation=disabled" plus lack of any ACL permitting read access could add up to problems. But this wasn't the cause.
 
Next, James & I went on site together in the hopes that a physical visit would shake something loose. I guess something about the physical proximity worked for us both.
 
James had an idea which identified a new problem. Specifically, the policy on the pottery server was left at the default for Windows Server 2003, namely that the LMCompatibilityLevel was set such that it wouldn't ever negotiate NTLMv2. This worked fine for the pottery domain, but Nebula2 only permitted NTLMv2. So subinacl couldn't use the same token for the process, and this resulted in one of the errors above. Once we fixed the LMCompatibilityLevel, we ended up with a new error.
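 
(As an aside, if you want to check the LMCompatibilityLevel on a box of your own, the setting lives in the registry and is also exposed in security policy as "Network security: LAN Manager authentication level". Something like the following shows it and then raises it to the "send NTLMv2 response only" level; 3 is just the level we needed here, so pick whatever is appropriate for your environment. If the value isn't present, the OS default applies.)
 
reg query HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v LmCompatibilityLevel
reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v LmCompatibilityLevel /t REG_DWORD /d 3 /f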
 
This new error brought no end of grief. And for quite awhile we were stumped. Then I stumbled into a solution. I modified the command slightly to:
 
subinacl /subdirectories c:\ /changedomain=pottery=nebula2.washington.edu
 
notice that instead of the fully qualified DNS name for the pottery domain, I used the netbios name for the pottery domain.
 
And this worked. We were so shocked that I think it took us a minute to realize that it had worked and was busily working away.
 
Then my internal wheels (what Poirot calls the "leetle grey cells") began to churn, because this didn't, and still doesn't, make sense to me. My best hypothesis on why this worked is tied to the fact that sometimes the form of the domain name is equated with the authentication method--in other words, netbios name=NTLM and fully qualified name=Kerberos--and in this scenario the trust between the domains was an external domain trust, so Kerberos could not be negotiated. But that loose tie between the form of the domain name and the authentication method doesn't always hold, and this is a very unsatisfactory explanation for this behavior.
 
Perhaps you've got a good idea or even the explanation and you'd care to share it?
 
Anyhow, I'll probably post something more about ADMT in the future. In the meantime, there's a semi-decent write-up at:
that I wrote. Additionally, these pages might have a few other tips not in that doc:
 
All of those documents are based on ADMTv2. The current version of ADMT is version 3.0, and there's a 3.1 in beta now. I've used 3.0, and there are a few minor differences I've noticed which are worth writing up. There's also no real help in any of those documents with the admt command lines you might want to use.
UW Infrastructure4/21/2008 2:16 PMBrian Arkills
  

There are so many Sharepoint things I want to blog about that I'm having a hard time writing posts that aren't long and rambling with multiple topics. You'll be amused to know that I drafted a post a couple nights ago that I titled:

"(('Customizing vs. Developing,' OR 'Site pages vs. Application pages') AND 'What does ghosted mean?') OR ('How does Sharepoint render any given page?)'"

I know it's silly to have boolean logic in a title but I couldn't resist.

But that post suffered from this rambling syndrome (which I'm already prone to), so it won't see the light of day. Instead, I'm going to try to break it down into smaller consumable bits.

So ... let's talk about the difference between site pages and application pages.

Site pages are pages which live in the Sharepoint database.

Application pages are pages which live on the server's file system. Application pages usually also execute code, but not always.

In past posts, I've mentioned the fact that most everything in Sharepoint is stored in a sql database, not in the file system. Sharepoint adds to this illusion, by using URLs that make the end user *think* there is a file system behind all the pages they access.

On a well-used Sharepoint server, the majority of pages are site pages. However, on a freshly installed Sharepoint server, most of the pages are application pages. You might have noticed that every Sharepoint site has a handful of directories which are identical. They are virtual directories, and each points at the same directory. And in fact, these underlying directories are real--holding actual files on the file system of the Sharepoint server. The actual files in these "real" directories (served via virtual directories on every site) happen to be the .aspx pages that are called 90% of the time any user is interacting with Sharepoint. These files are actually code, grabbing content elements from the Sharepoint database, and rendering a page with that content. For example, if you go to any document library, you'll get the 'AllItems.aspx' page.

That's the same exact page almost every list in Sharepoint calls to display what's in the list.

Why would Sharepoint do this? Well, it turns out that this is one of the reasons why Sharepoint can scale to support ungodly numbers of sites. A majority of page requests are hitting the same hundred pages which are cached in memory, with a bit of content elements getting filled in from the (fast) sql database.

Sharepoint Designer is one tool that can be used to create and edit Sharepoint pages. In fact, it's the primary tool that most people will use. *Every* file that Sharepoint Designer creates or modifies becomes a site page. In other words, Sharepoint Designer can't be used to create or edit a physical file on the Sharepoint server. And as a side note, there is no offline mode for Sharepoint Designer--it only edits live Sharepoint sites (which has some implications you might want to plan around). If you edit a page which happens to have a physical presence, then unbeknownst to you, behind the scenes a copy of that file is stored in the Sharepoint database specific to your Sharepoint site, and the link from your site to that page which usually sends users to the physical file, now sends them to the file in the Sharepoint database. As an example, if you edit the default.master (which you should never do), you aren't editing the default.master in the file system. Instead, you've created a special custom version of the default.master stored in the Sharepoint db which is linked to from your site. In other words, Sharepoint changes the link from the physical file to the virtual file in the database (and makes a complete copy of your edited version to put in the database).

There are ways to create application pages (i.e. pages on the physical file system), but I'll refrain from going into that now. I'll also refrain from delving into master pages.

I will mention that an application page (i.e. it has a physical presence) that has been customized so that it becomes a site page is called "unghosted". This term is an unfortunate one, because it doesn't have any obvious meaning. Microsoft is apparently trying to kill off the ghosted/unghosted terminology, replacing it with uncustomized/customized (which are so much more obvious in meaning), but they've shot themselves in the foot within the Sharepoint framework (the code which Sharepoint developers rely upon) by hardcoding the old terminology there, which will likely perpetuate it.

Sharepoint2/28/2008 3:25 PMNETID\sadm_barkills
  
In this very loaded topic, I'll be walking through the question:
How do you get an email address on your user account in UWWI, or the NETID domain?
and I'll also be touching a few other email address related points along the way.
 
My colleague, James Morris, is responsible for the UW's "edge infrastructure" (among many other things), and he may have a follow-up post in the future to fill in some additional details which complement this post.
 
When most folks at the UW get a UW NetID, they are also eligible for a couple other mail related services. These include the deskmail service (what webpine points at, and the vast majority of UW folks use), and the "UW Email forwarding" service.
 
Outside of the deskmail service, there are many other email services provided both at the UW and elsewhere which UW folks might be using. So for any given UW person, the email address they might be actively using could be almost anything. And to be clear, some people have more than one active email address.
 
In general, there is an assumption that <uwnetid>@u.washington.edu will end up at the person's active email address. However, this is an assumption which may or may not be valid.
 
To accommodate this assumption, UW Technologies runs the "UW Email forwarding" service, which takes email destined for <uwnetid>@u.washington.edu (or @washington.edu) and forwards it along to an email address that the user may set.
 
That data is stored in a special-limited use LDAP directory. In other words, this is not the whitepages directory (directory.washington.edu), nor PDS (eds.washington.edu), nor GDS (groups.u.washington.edu), nor UWWI (netid.washington.edu).
 
This enables UW folks to use something like gmail but maintain the appearance of a UW email account, and also benefit from other UW email services like those provided at the edge (i.e. virus and spam checking).
 
Additionally, there's a way for people to "publish" their email address to others. This published email address information comes from the primary sources of enterprise data, two of which are the Student database (SDB) and the HR system. Assuming the person has allowed it, this published email address is put in both the whitepages directory and PDS. There is a special scenario for people who are both student and employee, but who have chosen to publish only one of their related email addresses--but I won't go into that scenario here.
 
Fortunately, there's a way for folks to edit their information in those systems. For employees you can go to  https://prp.admin.washington.edu/ess/uwnetid/address.aspx
and within the Campus Address section choose to edit the Email field.
 
Note that both of these sources of data are user editable. Users can make mistakes. There is no input validation, nor anyone looking for typos. And there also is no logic validation, so folks might be publishing an email address which is not:
{<uwnetid>@u.washington.edu,<uwnetid>@washington.edu,<valueofUWforwardingEntry>}
This latter scenario is highly relevant to the UW Exchange service, so keep it in mind.
 
That's the landscape and background that UWWI has to work with.
 
As you know if you've read past blog posts, there has been an evolution in the quality and quantity of directory information on user objects in UWWI.
 
At account creation time (and any other kiwi event), we grab relevant directory information from PDS (assuming it's marked as published). However, the mail attribute is not included in the information taken from PDS at account creation (more on this below). We also use Identity Lifecycle Manager (ILM) to synchronize directory information from PDS roughly every 2 hours. However, neither of these actions is a direct 1:1 synchronization. There are a variety of reasons for this including differences in schema between the directories, and differing use cases.
 
For the mail attribute on UWWI user objects, there is a combo of interesting logic, primarily because of the presence of the UW Exchange service within UWWI. The logic each of those events follows is different. The account creation and kiwi events set the mail address to <uwnetid>@u.washington.edu. This logic is limited because it was written early in the service's life, and we knew something better would come along. The ILM logic synchronizes the "published" email address info in PDS (using official business logic if the person is both a student and employee), but only if that user does not have an Exchange mailbox. If the user has an Exchange mailbox, it does nothing.
 
Before I explain that, I want to step back. You'll note that the "published" email address was chosen instead of the "uw forwarding" address or the general assumption of <uwnetid>@u.washington.edu. There are differing opinions about this choice, but when all is said and done, using the published email address permits the widest amount of flexibility--which is why it was chosen. Obviously there are potential problems with this choice, including the possibility of bad data (not true with the general <uwnetid>@u.washington.edu) and the possibility that the value is not actually the address the user is actively using (see my comments above about logic validation), but the rationale behind the choice is good. Bad data or choices can be refined, and user education should improve to help avoid them.
 
Now ... back to our story. Why would we stop synchronizing the mail attribute when a user gets an Exchange mailbox? This is because Exchange links some of its basic functionality to that attribute. The mail attribute must agree with another Exchange-centric attribute or else there will be Exchange problems for that user. Since the "published" email address has no logic validation upon which we might restrict user choices to agree with Exchange, once a user gets an Exchange mailbox, we must stop paying attention to the published email address for that user.
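 
To restate that decision logic compactly (the real thing lives in an ILM rules extension, i.e. managed code--this is just an illustrative sketch with made-up names):
 
' Illustrative only -- not the actual ILM rules extension code.
Function DetermineMail(currentMail, publishedMail, hasExchangeMailbox)
    If hasExchangeMailbox Then
        DetermineMail = currentMail     ' Exchange owns mail; leave it alone
    ElseIf publishedMail <> "" Then
        DetermineMail = publishedMail   ' use the published address from PDS
    Else
        DetermineMail = currentMail     ' keep the default kiwi set at creation
    End If
End Function
 
WScript.Echo DetermineMail("barkills@u.washington.edu", "", False)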
 
Currently, this means that Exchange users must ask us to manually modify their email address if they desire a change. In the future we may have a solution for this.
 
Also note that some uw netids have no published email address for a variety of possible reasons. In that case, the kiwi account creation code covers this case by setting <uwnetid>@u.washington.edu. And there's no way for those uw netids to change their email address at this time either.
 
That's probably enough on this topic for now. I'm sure I've neglected some portion of this which I'll have to add later.
UW Infrastructure2/26/2008 1:51 PMNETID\sadm_barkills
  
So this week I've been at a Sharepoint Developer class. This is the second in a series offered by Netdesk, i3602. Many of my prior Sharepoint posts came out of the first class I took about a year ago.
 
This class is excellent. The instructor, Michael Cierkowski, wrote the course materials himself based on the MS Press book, Inside Windows Sharepoint Services 3.0, http://www.microsoft.com/mspress/companion/9780735623200.
 
You'll see quite a few interesting code samples downloadable on that website, and if you buy the book, you get the explanations and conceptual underpinnings. The labs in the class are quite good, based on the idea that the instructor expects you to apply what you've learned earlier in the class. In other words, as you go along, you get fewer "step by step" instructions, and more high-level instructions, e.g. "do X--like you did in the last lab but with this slight twist". It's harder, but it makes you learn the material much more solidly than most classes do.
 
I'm still getting my head straight about just what I'm learning, and what kinds of things I might blog about out of this class (and I'm only half thru the class too), but right now I envision these posts:
 
  • 'Customizing vs. Developing'
  • 'Site pages vs. Application pages'
  • 'What does ghosted mean?'
  • 'What is a feature? What is a solution?'
  • 'When should I use an application page vs. a webpart vs. a feature?'
  • 'Customized master page vs. themes vs. custom CSS file'
  • 'The challenges of developing on Sharepoint'
  • 'Why should I care about Sharepoint development? (or how to save yourself time and do cool things)'
  • 'Webparts'
  • 'More on data sources--when to use the Sharepoint DB vs. when to keep the data external'
Of course, some of the material behind these envisioned posts may not materialize enough to justify a post, so we'll see. And my work time is not exactly without lots of large demands, so it may take awhile to get these out the door, but I figured just the post titles would be interesting.
Sharepoint2/21/2008 8:32 AMNETID\sadm_barkills
  
I've just finished a process of manually merging the two blogs hosted under viewpoint into a single one that can be bundled up for import into the coming sharepoint service.
 
I've re-categorized several posts, and moved files and other content too.
 
Posts from David Zazzo lost their authorship (he's not here to move them).
 
Most comments were moved, but thrown into the post body.
 
The ms-collab blog got the older winauth content, and then when we move to the production service that blog will be renamed to more closely represent the folks doing the actual writing.
We're also pondering a couple other changes, including changing from basic authentication over https to Windows Integrated over http. This is largely because many viewers don't trust the UW certificate authority (CA) which issued the certificate for this site.
 
We originally ended up at basic over https because of a couple reasons, now all gone:
 
  • Some viewers had browser/platform combos which couldn't do Kerberos or NTLMv2, but wanted to post comments or configure alerts (the only things that require login)
  • We didn't want to allow anonymous comments, because we got spam
  • Doing basic auth without ssl breaks UW Computing policy

With the NETID domain allowing NTLMv1, we've moved to a different place since an overwhelming number of browser/platform combos do support that.

Assuming we move forward on this change, we'd likely regain https access (via a new hostname) when we move to the coming Sharepoint service, likely with a Thawte issued certificate.

Sharepoint2/20/2008 3:38 PMNETID\sadm_barkills
  

From a recent post to the Windows HiEd list, a couple folks have asked if I'd publish source for a vbScript that sends mail using an smtp server rather than a local smtp service.

Here's an example that exercises a couple options to give you a message sent with authentication (basic over SSL) on the now deprecated port 465/tcp.  Unfortunately, that's still the only port that most of the Microsoft smtp stack implementations seem to work right with.  For those here at the UW, it does work with our central submission service (smtp.washington.edu).

Adding attachments, message content, etc. is all left as an exercise for the reader.

On Error Resume Next   ' required so the err.number checks below actually get a chance to run

' Require all five arguments before reading them; Arguments.Item() throws if one is missing.
If wScript.Arguments.Count < 5 Then
    wScript.Echo "Usage:"
    wScript.Echo " testMail.vbs <relayname> <to> <from> <username> <password>"
    wScript.Quit
End If

smtpserver = wScript.Arguments.Item(0)
mailTo = wScript.Arguments.Item(1)
mailFrom = wScript.Arguments.Item(2)
uname = wScript.Arguments.Item(3)
pw = wScript.Arguments.Item(4)

Dim objCDO
Set objCDO = CreateObject("CDO.Message")

' Configure CDO to submit via the remote smtp server (sendusing=2 means "send via the network"),
' authenticating with basic auth (smtpauthenticate=1) over SSL on port 465.
objCDO.Configuration.Fields("http://schemas.microsoft.com/cdo/configuration/smtpserver") = smtpserver
objCDO.Configuration.Fields("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
objCDO.Configuration.Fields("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 465
objCDO.Configuration.Fields("http://schemas.microsoft.com/cdo/configuration/sendusername") = uname
objCDO.Configuration.Fields("http://schemas.microsoft.com/cdo/configuration/sendpassword") = pw
objCDO.Configuration.Fields("http://schemas.microsoft.com/cdo/configuration/smtpauthenticate") = 1
objCDO.Configuration.Fields("http://schemas.microsoft.com/cdo/configuration/smtpusessl") = True
objCDO.Configuration.Fields.Update

' Optional extra mail headers on the message itself.
Set Flds = objCDO.Fields
With Flds
    .Item("urn:schemas:mailheader:Precedence") = "Bulk"
    .Item("urn:schemas:mailheader:X-Test") = "Secure mail from scripts"
    .Update
End With

If err.number <> 0 Then
    wScript.Echo "Error prior to Send: " & err.number & vbcrlf & vbtab & err.description
    wScript.Quit
End If
Err.Clear

With objCDO
    .To = mailTo
    .From = mailFrom
    .Subject = "Test Message " & now
    .TextBody = "This is a test message."
End With

objCDO.Send

If err.number = 0 Then
    wScript.Echo "Message sent successfully"
Else
    wScript.Echo "ERR: " & err.number & vbcrlf & vbtab & err.description
End If

Engineering2/15/2008 11:37 AMJames Morris
  

There are quite a few social networking websites out there. You've heard of them, and may even be using them. They give individuals a way to put information about themselves out in front of others, and allow individuals to connect with people who might share a common interest or a needed skill.

However, none of them really gives you a scope which easily integrates with other UW folks, whereas the coming UW Sharepoint service will give you that.

And this is one of many reasons why moving your local Sharepoint service to the central UW Sharepoint service makes a lot of sense.

If you will, imagine a future where all the UW IT staff have populated their Sharepoint profile with their responsibilities, skills, past projects, and interests. Then, imagine how as an IT staff member you can find other IT staff who share your interest in a technology. Or imagine how, in the midst of an issue when established channels seem to be failing, you can identify which staff have responsibility for a particular service. Or imagine how IT managers might have a better picture of what their staff are responsible for, and which staff have skills that qualify them for special project work.

I'm sure we can all imagine other scenarios that could leverage this kind of information.

But it's important to note that these scenarios are only possible if data is collected from the relevant body of people (and that data is accurate). The more data collected, the better the picture becomes.

Sharepoint2/4/2008 2:37 PMNETID\sadm_barkills
  
So you might have noticed that this blog has been down for a bit.
 
The problem was that the IPSEC Services service wouldn't start, which keeps the tcp/ip stack from connecting. I've seen this problem once before, but it was on a server that was being decommissioned so I didn't dig into it.
 
Anyhow, the error message produced when trying to start ipsec was:
"Could not start the IPSEC Services service on Local Computer.  Error 2: The system cannot find the file specified."
 
Apparently this is one of several symptoms that can happen when the ipsec local policy gets corrupted or partially deleted.
 
And apparently it isn't unheard of (though it's certainly rare) for the local ipsec policy to be mangled or partially deleted after applying a patch or service pack.
 
And in fact, that's how the server hosting this blog got into this state.
 
See http://www.howtonetworking.com/VPN/rebuildipsec.htm for how to rebuild the local ipsec policy state so you can restart ipsec and get your system back online.
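As a side note, if you want a quick way to check (and optionally kick) the service without clicking through the services MMC, WMI works fine for this. Here's a minimal sketch; it assumes the IPSEC Services service short name is PolicyAgent, which I believe is what it's called on Windows Server 2003, so verify that on your own box first:
 
' Check the IPSEC Services service state via WMI, and try to start it if stopped
Set objWMI = GetObject("winmgmts:\\.\root\cimv2")
Set colServices = objWMI.ExecQuery _
    ("SELECT Name, State, StartMode FROM Win32_Service WHERE Name = 'PolicyAgent'")

For Each objService In colServices
    WScript.Echo objService.Name & " is " & objService.State & " (start mode: " & objService.StartMode & ")"
    If objService.State <> "Running" Then
        result = objService.StartService()   ' returns 0 on success
        WScript.Echo "StartService returned: " & result
    End If
Next
 
Of course, until the underlying local policy store is rebuilt per the link above, the start attempt will just fail with the same error.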
Engineering2/4/2008 2:21 PMNETID\sadm_barkills
  
I just wanted to call attention to the fact that during last week the Libraries executed a Windows domain rename. You use the Windows domain rename functionality to change the underlying DNS domain behind your Windows domain. To my knowledge, this is the first UW Windows domain to have ever done so.
 
Windows domain rename is functionality that has been around for a while, and client support for it has been baked into Windows since Windows 2000.
 
As you might imagine, such an operation is not for the faint of heart, and can involve many hidden perils for those who don't take enough time to carefully plan their path. Even for those who do take adequate planning time it can be a losing proposition, because identifying and testing every application which might hard-code or permanently cache that DNS domain can be a larger exercise than the benefit warrants (or even an impossible exercise depending on the size and complexity of your domain).
 
At a really high level, you get the DNS stuff all set up, make a few changes in your Active Directory, and then get all domain members to reboot once or twice; upon reboot the domain clients pick up the change and adjust themselves accordingly.
 
For more info on Windows domain rename, check out http://technet.microsoft.com/en-us/windowsserver/bb405948.aspx where you'll find links to the tools, an "understanding" whitepaper, and a step-by-step guide.
 
You might also consult with Mike Reynolds and his coworkers in the Libraries ITS to find out how their experience went, and what they did to prepare to make their experience a success.
Engineering1/11/2008 9:01 AMNETID\sadm_barkills
  

As some of you may have heard, David Zazzo will be leaving the UW for a new opportunity with Microsoft Consulting Services - PacWest, i.e. the West Coast corporate sector.

His absence will be felt by many on the MSCA initiative, especially in the near future forging new ground with Exchange and Office Communications Server (OCS), in the Nebula environment, and in supporting much of the core UWWI architecture.

Those who have worked with David closely over the past 6 years will certainly miss his unique sense of humor, and the fresh "let's get things done" attitude he brings to everything he works on.

This blog (and www.netid) will certainly miss his contributions, from thoughtful posts to layout and well-designed pictures.

Please take the chance to wish him well before he leaves next Friday!

Engineering1/11/2008 8:57 AMNETID\sadm_barkills
  
Can you do that!? Yes, and in fact, the University of Missouri has a working demo of this that I've seen (I even have 10GB of virtual images of that demo environment). In terms of the coming Sharepoint service offer, it isn't immediately clear today whether we'll be able to offer federated authentication at the time of the initial offer, but I do think it's a feature we need to support.
 
There are two general use cases for using federated authentication with Sharepoint:
a) All your collaborators have a UW NetID, but for some reason their OS and browser can't do Windows Integrated authentication (Kerberos, NTLMv2, or NTLMv1).
b) Not all of your collaborators have a UW NetID, but they have credentials elsewhere via either an ADFS or Shibboleth infrastructure.
 
Most of this blog post will focus on b), but let's talk just a bit about a) first.
 
An overwhelming majority of browsers and operating systems do permit Windows Integrated authentication of some flavor, so there isn't a large demand for this; however, it is an option for that small set of clients.
 
In this scenario, you'd permit pubcookie authentication via the UW's Shibboleth IdP. IdP is the terminology for the server which issues the authentication token in this technology.
 
For either a) or b) you need to have a second web application which has an authentication provider other than Windows Integrated. That other web application can point at the same site content, but because of the security details underneath both federation technologies, you are required to have a different URL if you have access to any given site via both Windows Integrated and a federated authentication provider. Some of this is noted in this ADFS and MOSS configuration link, http://technet2.microsoft.com/Office/en-us/library/61799f9a-da01-4c11-b930-52e5114324451033.mspx?mfr=true
 
For either a) or b) there are side-effects/limitations/broken features. For example, the Office document library integration breaks. Which brings me to the second of many external links, Unsupported SharePoint features with ADFS, http://go.microsoft.com/fwlink/?LinkId=58576. And yes, that list is relevant for both a) and b). So generally speaking, if you can help it, you want to do Windows Integrated authentication. And while I'm back on that topic, I should also mention that those clients which can't do Windows Integrated auth have another option, which is to do basic auth together with SSL.
 
Moving on to b), we'll get a bit into the architecture, and then later on, I'll talk about how there are actually two possible solutions for a) and b) (which makes talking about this already complex topic much more complicated).
 
So ... for a general overview of Shibboleth and ADFS, there are a number of good online resources:
  • Understanding Shibboleth, https://spaces.internet2.edu/display/SHIB/UnderstandingShibboleth
  • ADFS Design Guide, http://technet2.microsoft.com/windowsserver/en/library/a6635040-3121-47ce-a819-f73c89dafc571033.mspx?mfr=true
Doing the interoperability between ADFS and Shibboleth requires installing the Shibboleth extension for ADFS, https://spaces.internet2.edu/display/SHIB/ShibADFS, on your IdP, and on every IdP you'd like to accept authentication claims from.
 
This link has probably the best information available anywhere about the actual interoperability between ADFS and Shibboleth: Shibboleth and ADFS Interoperability.
 
For a quick overview of the architecture components involved, let's talk about an imaginary use case. Say you've got a collaborative group that includes folks from Stanford and the UW.
 
The Stanford folks would go to the Sharepoint site, and the ADFS Web Agent would redirect them to authenticate with their Shibboleth IdP (using their local webauth sso provider) and present that claim to a UW ADFS-R server. That server would take the claims, perform any translations specified in a trust agreement, and issue a new set of claims to present to the Sharepoint server which has the ADFS Web Agent installed. That ADFS Web Agent takes those claims, and translates them to the Sharepoint security model.
 
 
Now the description I've just shared is one of two possible solutions for b) (and to some extent a)). The other solution drops ADFS from the mix. That one goes like this:
 
The Stanford folks would go to the Sharepoint site, and the Shibboleth agent would redirect them to authenticate with their Shibboleth IdP (using their local webauth sso provider) and present that claim to our Shibboleth SP server. That server would take the claims, perform any translations specified in a trust agreement, and issue a new set of claims to present to the Sharepoint server which has the Windows Shibboleth agent on it. The Windows Shibboleth agent takes those claims, and passes them to a custom asp.net membership provider which translates them to the Sharepoint security model.
 
There is a vendor which has something that I think is (or is close to) the custom asp.net membership provider needed. You can read more about it at http://www.9starresearch.com/activesharefs.html.
 
I suspect the solution which includes ADFS is going to be preferable, but that's still to be fully considered.
Sharepoint12/11/2007 3:47 PMNETID\sadm_barkills
  
This is gonna be a lengthy post aimed primarily at Windows administrators. Consider yourself forewarned. But to entice you to the end, I'll also mention that there is a lot of good, technically useful info in here, including a few details which aren't well discussed anywhere.
 
So there are lots of bad guys out there. And some number of them like to try and brute-force your Windows accounts. And most of us take the sensible precautions by enforcing strong password strength and configuring account lockouts. You can quickly get into a denial of service situation if you aren't careful with the account lockout settings.
 
We use a 5 minute lockout after 150 failed logins during a 5 minute period. This avoids most denial of service situations, while disrupting the brute force attacks very effectively.
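As an aside, if you ever want to double-check what lockout settings your domain is actually enforcing (rather than what you think the GPO says), a few lines of ADSI will read them off the domain head. This is just a minimal sketch; nothing in it is specific to our environment, and the duration attributes come back as large integers in negative 100-nanosecond intervals, hence the conversion helper:
 
' Read the domain account lockout policy via ADSI
Set objRootDSE = GetObject("LDAP://rootDSE")
Set objDomain = GetObject("LDAP://" & objRootDSE.Get("defaultNamingContext"))

' Durations are stored as IADsLargeInteger values in negative 100ns intervals
Function ToMinutes(objLI)
    Dim lngHigh, lngLow
    lngHigh = objLI.HighPart
    lngLow = objLI.LowPart
    If lngLow < 0 Then lngHigh = lngHigh + 1   ' LowPart is signed; adjust before reassembling
    ToMinutes = Abs(lngHigh * (2^32) + lngLow) / 600000000
End Function

WScript.Echo "Lockout threshold (bad attempts): " & objDomain.Get("lockoutThreshold")
WScript.Echo "Lockout duration (minutes): " & ToMinutes(objDomain.Get("lockoutDuration"))
WScript.Echo "Observation window (minutes): " & ToMinutes(objDomain.Get("lockOutObservationWindow"))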
 
But during this past year we ran into a situation where these settings didn't prevent a persistent account lockout. So we had to track down why the account was getting locked out, and where from.
 
First we tried a low-overhead solution of rebuilding the user's computer. Often account lockouts are due to some process on the user's computer having an old password cached, and endlessly trying to log in with that old password. An event on the user's computer seemed to support this theory:
 
Event Type: Warning
Event Source: Tcpip
Event Category: None
Event ID: 4226
Date: 11/20/2006
Time: 4:26:32 PM
User: N/A
Computer: blahblah
Description:
TCP/IP has reached the security limit imposed on the number of concurrent TCP connect attempts.
 
which is logged when something on that computer is trying to open new outbound tcp connections faster than the OS will allow. In more detail, this means the limit on concurrent half-open (incomplete) outbound connection attempts is being hit faster than prior attempts complete or time out. In other words, something is badly misconfigured. What that something is isn't always clear; sometimes legitimate applications can cause this, sometimes it's malware. You could run sysinternals tcpview (or another similar tool) to figure out which app is doing this. However, you'd need to catch the behavior while it was happening. Instead, we just went ahead and rebuilt the user's computer.
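(If you'd rather not install anything on the machine, a quick-and-dirty alternative is to capture netstat output while the behavior is happening and look at the half-open attempts along with the owning process ID, which you can then chase down in Task Manager. A minimal sketch:)
 
' Snapshot outbound connection attempts stuck in SYN_SENT, with the owning process ID
' Run this while the 4226 behavior is actually occurring
Set objShell = CreateObject("WScript.Shell")
Set objExec = objShell.Exec("netstat -ano")

Do Until objExec.StdOut.AtEndOfStream
    strLine = objExec.StdOut.ReadLine()
    If InStr(strLine, "SYN_SENT") > 0 Then WScript.Echo strLine
Loop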
 
But that didn't end the problem.
 
So then we searched all our DCs for eventids 672, 680, 539, and 644 in an effort to locate where this was being perpetrated from.
 
As you know, 539 and 672 events contain source IP info, whereas 644 and 680 events contain source workstation name (netbios) info. The nature of the process causing the endless account lockouts was such that only 644 and 680 events were generated. So we had only a netbios name to work with.
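Incidentally, you don't have to click through Event Viewer on each DC to do this kind of search; the Security log is queryable over WMI. A rough sketch of the sort of query we were running, pointed at a single DC--the DC name is a placeholder, the account running it needs rights to read the remote Security log, and be warned that an unfiltered query like this can take a while on a busy DC:
 
' Pull 680 (NTLM auth) and 644 (account locked out) events from a DC's Security log
strDC = "dc1.example.com"   ' placeholder - substitute one of your DCs

Set objWMI = GetObject("winmgmts:{impersonationLevel=impersonate,(Security)}!\\" & strDC & "\root\cimv2")
Set colEvents = objWMI.ExecQuery _
    ("SELECT * FROM Win32_NTLogEvent WHERE Logfile = 'Security' " & _
     "AND (EventCode = 680 OR EventCode = 644)")

For Each objEvent In colEvents
    WScript.Echo objEvent.TimeGenerated & "  EventCode=" & objEvent.EventCode
    WScript.Echo objEvent.Message
    WScript.Echo String(60, "-")
Next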
 
Since this computer name was not in our domain/forest nor in WINS or DDNS (and being a university, we have an open network), this was a dead-end.
 
We tried analyzing netstat output from the DCs getting the 680 & 644 events to correlate the open sessions at the time of the failures to obtain an IP address. We further eliminated all connections from known domain computers. This gave us no leads, which was incredibly mysterious.
 
We then tried sniffing network traffic at the DCs. This also gave us no leads, which again was very mysterious.
 
I then stumbled upon a fairly old Microsoft webcast, http://support.microsoft.com/default.aspx?scid=%2Fservicedesks%2Fwebcasts%2Fen%2Fwc022703%2Fwct022703.asp&SD=GN. In that webcast, Microsoft suggests (but doesn't explain why) that turning on "netlogon logging" would be beneficial in tracking down account lockout events. We tried that, hoping an IP address would be logged there. It wasn't. But it did help.
 
Specifically, the netlogon logs told us where the failures were being "chained" from. We discovered that the audit failures were being chained from a member server to a DC to the PDC emulator. Which explained why netstat & sniffing at the DC level failed (along with the fact that secure channel communications between domain computers were obscuring the searchable details in a network sniff, like the username). At the member server, we used a network sniff to obtain the IP of the offending computer, and the rest was history.
 
Turning on netlogon logging is done by running the following command:
 
"nltest /dbflag:2080ffff"
 
plus a bounce of the netlogon service.
 
The hex string specifies the verbosity. The value above is full verbosity. According to the webcast, this at maximum generates 40MB of log--which is nothing. They recommend turning it on for all DCs, which is our current practice.
 
The log file this generates is located at c:\windows\debug\netlogon.log.
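If you're turning this on across several DCs, it's easy enough to wrap the command and the service bounce in a little script and push it around. A minimal sketch, run locally on each DC; it just shells out to the same nltest command shown above and restarts Netlogon via WMI:
 
' Turn on verbose netlogon logging and bounce the Netlogon service so it takes effect
Set objShell = CreateObject("WScript.Shell")
objShell.Run "nltest /dbflag:2080ffff", 0, True

Set objWMI = GetObject("winmgmts:\\.\root\cimv2")
Set colServices = objWMI.ExecQuery("SELECT * FROM Win32_Service WHERE Name = 'Netlogon'")
For Each objService In colServices
    ret = objService.StopService()
    WScript.Sleep 5000   ' give the service a few seconds to stop
    ret = objService.StartService()
Next

WScript.Echo "Watch c:\windows\debug\netlogon.log for the chained failures."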
---
I'd also mention that this resource:
http://www.ultimatewindowssecurity.com/encyclopedia.html
is invaluable. It's so much more complete than Microsoft security event documentation.
Engineering11/19/2007 1:07 PMNETID\sadm_barkills
  
I've come across a variety of Forms oriented content which I think is useful. I've split it into several sections, starting with the more basic stuff first.
 
Basic Forms Information
The Office team has:
On the flip side of that last item, here's a blog entry detailing the infopath functionality lost when using the browser-compatible feature.
 
Doing stuff with the Forms Data
When it comes to using the data submitted via your form, there are many options:
  • Associate your form content type with a custom list, have the submit action be directed at this sharepoint list, and make the relevant input field be the columns of that custom list.
  • Associate your form content type with a list or document library, have the submit action be directed at this sharepoint list, and create a view with the relevant input fields as the columns of that view.
  • Associate your form content type with a list or document library, have the submit action be directed at an email address, and plan to use Outlook 2007 to work with the forms. For more on this, see http://office.microsoft.com/en-us/infopath/HA102068981033.aspx and http://blogs.msdn.com/tudort/archive/2006/02/22/536800.aspx.
  • Associate your form content type with a list or document library, have the submit action be directed at a web service, and use the core system.dataset type within that web service to work with the data.
  • Associate your form content type with a list or document library, have the submit action be directed at a database (other than Sharepoint), and manage the data via that database.
So the key configuration point here is that it all depends on how you configure the submit action within that InfoPath form. You might recall the example I blogged about where I chose to submit form data to a sharepoint list; however, I might just as easily have chosen to submit that data elsewhere, if it made processing the data easier--or if the privacy of the data necessitated that it be stored specially.
 
On the infopath team blog, there's a good post on data connections from Forms which outlines the database options.
 
For those of you who might like the target to be an Access database (why not move to SQL?), there's another good post on Access specific data connections from Forms.
 
If you do submit the form data to a sharepoint list, you can programmatically retrieve that data and do what you want with it.
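As a concrete (if low-tech) illustration, the Lists.asmx web service that ships with WSS/MOSS will hand you the raw list items over SOAP, and anything that can speak HTTP can call it. A minimal vbScript sketch--the site URL and list name below are placeholders for wherever your form submissions landed, and it relies on the calling user's Windows Integrated credentials:
 
' Retrieve items from a sharepoint list via the Lists.asmx GetListItems web service
siteUrl = "http://sharepoint.example.com/sites/demo"   ' placeholder site
listName = "Petty Cash Requests"                       ' placeholder list

soap = "<soap:Envelope xmlns:soap=""http://schemas.xmlsoap.org/soap/envelope/"">" & _
       "<soap:Body>" & _
       "<GetListItems xmlns=""http://schemas.microsoft.com/sharepoint/soap/"">" & _
       "<listName>" & listName & "</listName>" & _
       "</GetListItems></soap:Body></soap:Envelope>"

Set http = CreateObject("MSXML2.XMLHTTP")
http.Open "POST", siteUrl & "/_vti_bin/Lists.asmx", False
http.setRequestHeader "Content-Type", "text/xml; charset=utf-8"
http.setRequestHeader "SOAPAction", """http://schemas.microsoft.com/sharepoint/soap/GetListItems"""
http.Send soap

' The response is XML; each z:row element is a list item with ows_* attributes
WScript.Echo http.responseText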
 
Advanced Forms Info
For developer-types, there are a couple folks who have figured out how to embed a form within a webpart, so that you might have a form be part of a sharepoint page:
This might be highly useful in the context of a departmental web portal where you want to raise the visibility of a form by embedding it on the front of your department's web page.
 
Earlier I mentioned the option of targeting a web service with your form submissions. Here's more detail on how to access a web service from a browser-enabled form.
 
You can also populate forms with data already in Sharepoint or some other database by using a data connection within the infopath form. Imagine a text input field which would have unnormalized input unless you restricted the possible input to an existing list of normalized values. See here:
http://blogs.msdn.com/infopath/archive/2007/01/15/populating-form-data-from-sharepoint-list-views.aspx.
 
You might also imagine an infopath form which generates calendar events in a shared sharepoint calendar webpart. Imagine a process for reserving a shared resource ...
 
Free Self-Training Resource
Finally, if you'd like to get yourself up to speed on all the InfoPath functionality, see this list of free training labs live on MSDN.
Sharepoint10/8/2007 2:40 PMNETID\sadm_barkills
  
So back in June, James Morris & I blogged a little bit about some exciting group policy stuff we had seen at TechEd. I'm back with a bit more detail, because I think this has very relevant impact on UWWI, and well ... it's just cool!
 
So a while back, Microsoft bought a company called DesktopStandard, which made a variety of 3rd party group policy tools and add-ons. They had about 3000 custom group policy settings which will someday soon be added to the existing set (that's twice what you get with the default w2k3 plus vista group policies). Microsoft did some work to eliminate any tattooing issues with these group policies, so they aren't quite so custom anymore. I'll give some sample categories of the new policies below.
 
Additionally, Microsoft will be integrating a group policy feature from DesktopStandard called filters. This is kind of like the existing ACL-based filters, but so much more extensive that it's crazy. I'll give some sample categories of the filters below.
 
Microsoft has already taken one DesktopStandard group policy technology and rolled it out: Advanced Group Policy Management, which is part of the Microsoft Desktop Optimization Pack. With it you can implement change management on your group policy, and easily see differentials between two group policies (making it easy to see what's changed on an edited group policy that's being proposed for release).
 
Now ... some more details. Some categories of the new group policies coming from this are:
 
power management
scheduled tasks
applications
drive maps
environment
shortcuts
files
folders
ini files
registry
 
We were told this set would eliminate the need for login scripts, which is one of the issues we need to solve in the Delegated OUs project. I have to wonder if this set plus the filters might also give us a way around the home directory issue we need to solve.
 
Speaking of which, here's a list of categories on those filters:
 
battery present
computer name
cpu speed
dial-up connection
disk space
domain
environment variable
file match
filter group
ip address range
language
ldap query
MAC address range
message box
MSI query
Operating system
organizational unit
pcmcia present
portable computer
RAM
Recur every
registry match
security group
site
terminal session
time range
user
WMI query
processing mode
 
So as an example, we saw a presenter deploy a shortcut to his desktop via group policy. He then edited that group policy and added filters which made the setting only apply to Windows 2000 computers, users in the domain admin group, and running in a Terminal Services session. His demo desktop, of course, didn't meet that stringent set of filters, and so the shortcut he had previously deployed disappeared (showing us that tattooing was no longer an issue for these new settings). The demo worked quite well and quickly.
 
We heard rumors that this set of new stuff would come concurrently with the WS2008 release, and it's one of the Microsoft technologies I'm really looking forward to seeing more on.
Engineering9/14/2007 1:48 PMNETID\sadm_barkills
  
In the Basic Sharepoint List Features post, I mentioned alerts, and how you can set up email-based alerts on any sharepoint list. This post looks a little deeper at this functionality, and explains how you might go about setting up an alert yourself.
 
Yep, you read that right: any end-user can set up the alert themselves without any administrator involvement.
 
Now, I'll confess I've got an ulterior motive here. When we first set up these Engineering blogs, we set up an alert to a mailing list. And apparently, now folks want to get on that mailing list in order to get alerts from these blogs. And as you'll soon see, that's not really required--anyone can set up their own alerts without needing to be on a mailing list.
 
We'll also be looking at Sharepoint email integration on a wider scale than just alerts. Contrary to what you might have been told previously (and the marketing material is definitely misleading), there is really no reliance on Exchange for Sharepoint email integration. However, there are Exchange-Sharepoint integration points. We'll get to this larger integration story after looking at alerts first.
 
So setting up alerts can seem a bit tricky because the interface to do so varies based on the site/page/list you are currently at. For Document Libraries, you'll find an Actions menu with an Alert Me option. But for most other Sharepoint locations, you won't find an action menu. Regardless of where you are, you should find the authentication (or profile) menu in the upper right corner. It'll say something like "Welcome NETID\barkills" or "Welcome Brian Arkills" or some variant. If you haven't authenticated, it'll say "Sign In", and you'll need to login first before you get a drop-down menu. If you click on it, what menu options you see depends on your level of permissions for the site. If the site has permitted anonymous access, and you do not have at least read permission on the site, then you'll only see two options:
  • Sign in as a Different User
  • Sign Out
If, however, you have at least read permission on the site, then you'll see more menu options:
  • My Settings
  • Sign in as a Different User
  • Sign Out
And if you happen to be a site owner, you'll see even more menu options:
  • My Settings
  • Sign in as a Different User
  • Sign Out
  • Personalize this Page
The menu option you need in order to set up alerts is the 'My Settings' one. So in some cases, you may need to request read permissions.
 
Within the My Settings page, you'll see a portion of what is known as your Sharepoint profile. Assuming you are using your NETID user account, you should see an email address listed which is your official @u.washington.edu forwarding email address. There should be a bar with a link entitled 'My Alerts' on this page. That will take you to the 'My Alerts on this Site' page. From there, you can choose the 'Add Alert' button/link. This will take you to a page where you can choose which sharepoint list you want to track. For this blog, you'd probably want to track the Posts list. After making a selection, and clicking Next, you'll see a page where you can configure the details of your alert.
 
You can change when alerts are sent, with options of:
  • all changes
  • new items are added
  • existing items are modified
  • items are deleted
And you can additionally filter on these conditions:
  • anything changes
  • someone else changes an item
  • someone else changes an item created by me
  • someone else changes an item last modified by me
  • someone changes an item that appears in the following view
And finally, you can choose the frequency of email alerts, with the following options:
  • send email immediately
  • send a daily summary (specify time)
  • send a weekly summary (specify day & time)
Now, let's move from the end-user perspective to the Administrator perspective.
 
To enable alerts on a sharepoint site, you first need to go into the 'Central admin' site for the sharepoint farm. From the Application Management page, you need to configure the Web Application Outgoing Email Settings. Each web application can have a different set of settings, where what is required/configurable is:
  • Outbound SMTP server
  • From address
  • Reply-to address
  • Character set
There is no control of the format of the email alert sent, and from what I'm told, it goes out as attached HTML.
 
You can also email-enable sharepoint lists, i.e. inbound email integration. This permits someone to send an email to a sharepoint list with content for that list. To use this functionality, you are *required* to install the Microsoft SMTP Server Service. No option to use Exchange for receiving & processing the email. After installing that component, you'll need to visit the Central Admin > Operations > Topology and Services category > Incoming Email Settings page to configure the farm to use that SMTP service.
 
After setting the SMTP service up, on the "settings" page for any given list, you should see an additional option under the Communications section, called Incoming Email Settings. You'll need to give it an email address. The email address has to route to your SMTP service.
 
If you also want that email address to show up in the Exchange GAL, then you need to get it set up as a contact in Active Directory. There is an option in the central admin configuration to automatically set up these Sharepoint-centric contacts in AD. However, it isn't clear yet whether there are implications which will prohibit doing this in the NETID domain. It's possible this could create a situation where a UW NetID account name collision would occur. There's some further investigation needed on our end here.
 
Finally, I should mention a few other Exchange integration options. Of course, there's the offline functionality that Outlook provides. There are several Sharepoint webparts which provide for integration with Exchange-based data. There's an Exchange OWA webpart, which would allow you to have your Exchange OWA experience within the context of a sharepoint page, subject to whatever configurations you apply to it. There is also an Exchange Calendar webpart, to expose Exchange calendar data on a sharepoint page. Within Outlook 2007, you can access a sharepoint-based calendar, and copy calendar events between any Exchange calendar you have access to and the Sharepoint calendar using an easy drag and drop action.
Sharepoint8/24/2007 10:51 AMNETID\sadm_barkills
  

Come visit the blog site to see a couple cool custom webparts.

Once there, you should see two different custom webparts with a snowflake theme:

-You should see a webpart which shuffles through a sharepoint picture library, which in turn is dynamically fed by picture content hosted by snowcrystals.com. There are about 50 unique snowflakes that cycle through.

-You should also see a webpart which displays snowflakes (not unique) down your browser screen, similar to what I think amazon did with their website around the Winter holiday season.

Both these custom webparts rely on AJAX, meaning they take advantage of javascript on your local computer/browser. I believe both work on either IE7 or Firefox.

Thought folks might enjoy a bit of fun that also demonstrates some of the coolness available via sharepoint. :)

Oh ... and no, I didn't write these webparts, nor the custom picture gallery template which does the dynamic picture download over the web. That credit goes to Todd Bleeker over at Mindsharp.

Total elapsed time to put all this together was less than an hour (due to my bumbling, and learning new things along the way), and I duplicated it in less than 5 minutes on a second attempt on a completely different sharepoint server.

Sharepoint8/23/2007 10:11 AMNETID\sadm_barkills
  
Here's a bunch of Sharepoint-related resource links for your reading list. Enjoy!
 
Sharepoint8/16/2007 11:51 AMNETID\sadm_barkills
  

So in the tools post, I talked a bit about what's individually possible with an InfoPath form and workflows within Sharepoint. In this post, I'd like to look at the combination of using both.

A couple weeks ago, I spent about an hour creating a form and workflow and publishing them on a personal site to get a sense for the experience.

For this demo, I tried to recreate C&C's Petty Cash reimbursement form and approval process. This form currently is a PDF form which you fill out online, print out, and mail to C&C's Business office, along with a printout of an approval email from one of eight people within C&C. In other words, the existing process is a mix of high-tech and low-tech. I was aiming for a completely high-tech electronic-based process. I'm sure there are additional pieces of the C&C process for this kind of thing that I neglected--I was simply trying for a concrete proof-of-concept. So as I write about this topic, I'll add concrete details based on that experience, and I've taken a few screenshots to give you an idea of what things look like.

As noted in that prior post, Infopath is both a forms creation tool and a tool for filling out forms, but you can browser-enable infopath forms so clients can use their browser to render and fill out the form. This feature requires Infopath 2007 and the Forms Server component which comes as part of the MOSS enterprise version (you can also get Forms Server as a standalone component).

Infopath has some sample forms you can use as templates. And so I initially tried starting with the 'expense report' sample and customizing it. However, I found that there were assumptions nested within the sample that broke as I customized the form. And initially I didn't know enough to be able to remove all the assumptions to fix the things I was breaking. So I ended up starting with a blank form; however, that didn't seem to increase the difficulty level. Designing the form was rather simple, with a design task bar that holds all the basic options you'll likely want. I inserted a table layout, added text box controls, date pickers, a repeating table (for adding an infinite number of itemized expenses), and a button control.

I formatted the various text boxes appropriately (some needed to be a decimal data type with a currency formatting), and for some text boxes I put in a default value. On the C&C form, the requestor can be someone other than the claimant, so I made the default value for the requestor's info be the values from the claimant text boxes. So for the general case where the claimant and requestor are the same, the end user only needs to supply that information within the form once. For the phone textbox, I added a data validation to ensure that input looked like a valid phone number. I did something similar for data validation with the email textbox. For the date textbox, I created a rule. This rule always fires, and has a function which sets today's date. So that field gets auto-filled out. The experience for creating the rules was similar to that within excel using functions and formulas, i.e. very simple. I could have set the default value as today's date instead, but I decided I wanted to see how rules worked.

Within the repeating table of itemized expenses, I created a total expenses textbox which doesn't accept any user input. Instead it sums all the individual expenses. Again this was done using an excel-like formula experience. I also had a textbox for the amount to be reimbursed which defaults to that total expenses value. But the user can override that default value, because they might not be entitled to claim reimbursement for the full cost of some of the items. Finally, I titled the button "submit". Prior to publishing the form to Sharepoint, I was unable to actually configure the sharepoint-specific actions associated with the button.

Here's what the form looked like within InfoPath after I was done:

Infopath Form

Once you've got a form created, Infopath allows you to save and/or publish your form. You can save it to a file location or to a sharepoint document library. However, the best option is to publish the form to a sharepoint site collection as a new content type. You specify the site collection as the destination for publishing, then choose to create a new content type for your form. The reason this is preferred is three-fold:

  • the form doesn't show up as an editable item within a document library, and
  • the form can be used on any list logically under that site collection--so it's more widely usable
  • the end-user experience for starting to fill out a new form makes more sense via this method--from the new menu of the list the user chooses to create a new item of that form content type, instead of opening the saved infopath document and saving it as a new file within the document library

One of the important pieces in the publishing step is to mark the form as browser-enabled.

So once you've published the form as a new content type, you now enable the content type for the sharepoint list within that site collection where you want the forms to be filled out from. That includes the following two steps:

  • From the list > Settings Menu > List Settings > General Settings column > Advanced Settings > Allow management of content types=Yes
  • From the list > Settings Menu > List Settings > Content Types section > Add from existing site content types > select and add new content type

So at this time, the new form content type you've added should have resulted in an item on the new menu, like so:

Content Type in New Menu

Once the form has been published to Sharepoint, back in Infopath you can now configure the button action to be submit, and to submit to the sharepoint server. And of course, you need to republish the form again, overwriting the prior version (i.e. the prior content type).

There are a couple gotchas here on the sharepoint list which are probably worth mentioning before moving on to workflow. First, each list needs to be configured to allow browser-enabled documents--this isn't the default setting. This is the server-side piece of enabling that infopath form to be rendered within a browser. That's at:

From the list > Settings Menu > List Settings > General Settings column > Advanced Settings > Opening browser-enabled documents=display as web page

And now that we've done that, you'll be happy to see below my form rendered in a browser--and not in IE even, in Firefox:

Browser-enabled Form Rendered in Firefox

Another gotcha is ensuring that your end users have permissions to see and create new items in the list. This may or may not be an issue depending on where the list is in the overall sharepoint site hierarchy.

Moving on to Workflow ...

So WSS comes out of the box with a single workflow, Three-State, while MOSS comes with:

  • Collect Signatures
  • Disposition Approval
  • Routing
  • Three-State
  • Approval
  • Translation Management

You may not see all of these workflow options depending on what features have been enabled in the sharepoint farm, and parent site collection.

I chose the Approval workflow, although I suspect a custom workflow might more closely mimic the existing process. Basically, any given C&C person would want to target their director as the initial approver before sending the form along to the C&C Business office. The default Approval workflow only allows you to target a static list of approvers, so that's not an exact match, but it's what I went with. I'll probably investigate custom workflows in the near future.

Anyhow, adding the basic workflows is rather simple (we'll get to the steps shortly), but you do have a choice of where to configure the workflows. Assuming you've added your form as a content type, then you can choose to associate the workflow with the content type. This means that all instances of the form across your site collection will follow that workflow. Alternatively, you can choose to associate a workflow at the list level. I'm going to talk about the latter here, but the former is probably preferred. And you can do both (or have multiple workflows at either level), if there's reason to have multiple workflows.

For list-based workflows, all the relevant steps are at:
From the list > Settings Menu > List Settings > Permissions and Management column > Workflow Settings > Add a workflow

One of the key pieces here is specifying the task list. Workflows usually generate tasks. These tasks are assigned to the appropriate parties, and usually the workflow does not progress until those parties change the status of their assigned task. So you need to be cognizant of what task list you target your workflow at, and communicate that information to the expected approvers. This is a common gotcha--in the context of this form-based workflow, the approvers will get an email with a link to the form, but the form itself has no approve or reject button. There may be some way to actually add approve/reject buttons that perform that task action, but it's not a default option. In any event, I'd suggest you include the URL for the task list in the email notification message for the workflow.

Out-of-the-box Workflows can be manually started or automatically started based on item creation/modification. You can configure parallel tasks or serially dependent tasks. You can configure due dates for the tasks, permit the approver to be changed by the initially assigned person, and add a cc for notifying other interested parties.
All in all, the out of the box workflows are very easy to setup. I don't have a sense yet for whether they have a really wide usefulness, or if custom workflows are going to be much more commonly needed.

I'm still learning about what can be done, but the possibilities are very tantalizing, and I figured this post would be widely useful at exposing the kinds of things that are possible.

Sharepoint8/15/2007 9:40 AMNETID\sadm_barkills
  

Much of this content has previously been posted on the sharepoint_tech list, however, additional content has been added.

-----

There are a variety of tools that can be used to work with sharepoint sites and pages. These tools range from easy to use (with limitations inherent in a simple UI) to very hard to use (with lots of potential). This post introduces the standard options that are available today.

The simplest tool you can use with Sharepoint to create sites and modify them is the browser-based UI--or put another way, no tool required. Via the browser, you can create and delete sites. You can modify sites/pages, modify content within sites/pages, and add new functionality--which in the Sharepoint terminology equates to adding webparts. Adding webparts via the browser UI is incredibly easy, requiring only a couple of clicks, plus any configuration required depending on which of the many webparts you choose. Using the browser UI you can configure workflows, add columns, configure permissions, grant access to the site, configure email-based alerts, and on and on. The browser-based UI is so rich, that the overwhelming majority of Sharepoint users will only ever use this. Because the browser UI is so dominant, it's easy to mistakenly think Sharepoint has the limitations you experience in that browser UI. It's a very rich experience, but it doesn't really extend to the html level of editing. So you can "direct" but not really "design". Which is a nice segue ...

Sharepoint Designer 2007 is the replacement for Frontpage 2003. You can use it with sharepoint or with any website (i.e. a non-sharepoint website). However, Designer does have a bunch of functionality specific to Sharepoint. Designer is the entry-level tool needed to create master pages, site templates, and custom workflow. You can use Designer to associate a new master page with any page, and create new content regions. Designer can also be used to do most of the things the browser UI allows you to do, including editing content, adding webparts, etc. Designer has a very busy UI, with menus, toolbars, floating window panes, and tabs within panes. When following step-by-step directions it can be easy to get lost, as many terms you might use for navigation are used in several places within the UI. As with most computing technology, there is plenty of web and sharepoint terminology "baked" into the experience. Fortunately, most of the folks who are going to need to use Designer will be used to this additional level of complexity. But for those folks who aren't web developers, you may want to provide some level of orientation.

Microsoft also has the Expressions suite for website design/development which is more of a head to head competition with Dreamweaver. But there is no functionality specific to Sharepoint within that tool suite.

Infopath is the tool to create forms (which can be rendered within any browser). Infopath is both a form creation tool and a tool for filling out forms. So you can get confused as to whether you are filling out the form or in design mode. Prior to Infopath 2007, you had to have infopath in order to fill out an infopath form. This limited the audience to clients on the Windows platform. However, with infopath 2007 you can now choose to browser-enable any infopath form. This allows the form to be rendered and filled out in a browser. Infopath has a rich ability to handle user input and work with it. You can include default values, perform function-based manipulations of data (e.g. sum numbers, do string manipulation, etc.), perform data validation (including regex like pattern matching), apply formatting (e.g. currency), create rules which fire off actions based on conditions, and more. If you have MOSS enterprise version you get the Forms Server component as part of that which allows you to collect and work with submitted forms.

And Visual Studio has always been able to work with websites, Sharepoint or IIS or other. Visual Studio provides access to the full asp.net framework. In addition, the entire Sharepoint platform has a .net framework behind it (microsoft.sharepoint). So for example, you can create Sharepoint sites from .net code. Or you can extend the existing sharepoint browser UI to expose functionality that's available but that the sharepoint project team didn't expose from the default out of the box toolset (this kind of extension is called a "Sharepoint feature"). Visual Studio is required for highly customized workflows, creating your own webparts, and highly customized data control scenarios.

You might also use something like powershell and the sharepoint framework to script common things like site creation. I've been told that the Sharepoint framework is not remote-capable, meaning that calls using it must be run from the actual sharepoint server, but I haven't been able to verify this yet.

Sharepoint8/14/2007 9:35 AMNETID\sadm_barkills
  

This content was originally posted on the sharepoint_tech list, and has been slightly edited for repost here. There is some new content in the asp.net bullet (last bullet).

---

Developing applications within sharepoint is largely dependent on getting access to data which isn't natively within the sharepoint DB to begin with. But accessing data from within Sharepoint isn't just for web developers anymore. End users can easily make use of external data from within Sharepoint. We'll explore that later.

Getting access to external data from within Sharepoint is supported via a variety of options, including:

  • Manual import into a sharepoint list. This method assumes that sharepoint will become the authoritative source of this data (unless you manually synchronize). Excel import, Access import, and several other methods can accomplish this. This is a very end user accessible method, but may not be ideal if the data already exists in a sql database. Basically this is good for smaller amounts of data and for ad hoc cases.
  • Business Data Catalog (BDC). The BDC is a collection of external data source connections, with definitions of the type of data within those external sources. These connections are defined centrally (this is one of the shared services that many farms can reuse), and then available to every Sharepoint site. By "pre-defining" the data connections once, the external data is widely useful to many people. The BDC allows end users to easily incorporate data into their sharepoint site without requiring those end users to know anything about the external data connection, or even the query language for that data source. Farm administrators setup the BDC connections, but end users get the benefit. This is good for data which has wide usefulness. External sources defined in the BDC can optionally be crawled by the Sharepoint search engine.
  • Define database connections within Sharepoint Designer. These connections can be leveraged with views, tables, and other data controls on that single webpage. This approach requires some technical knowledge of the database, the connection specifics, and the relevant query statement (usually tsql). This might be a common approach for web applications by web developers. This approach doesn't assume any special privileges with respect to the Sharepoint server (more on this below).
  • Define asp.net database connections with Visual Studio. The asp.net framework has been commonly available for quite awhile to create connections to external data sources. So it's no surprise that it can also be used within Sharepoint along with asp.net data controls (e.g. gridview) in highly customized ways. This approach requires developer-level knowledge, and would only be required for web applications with very unique requirements. This approach does assume some special privileges with respect to the Sharepoint server. Specifically, this approach requires code-behind, and all pages with code-behind fall into a special category in terms of how they are provisioned. This is because these pages don't actually live within the Sharepoint DB, but instead live on the file system itself. On a shared Sharepoint server, there may be change management procedures behind deploying these pages.

Obviously there are many points of accessibility here, with options for users of varying sophistication. The bottom line is that Sharepoint makes it very easy to get at external data and use it within a web context.

Sharepoint8/14/2007 7:26 AMNETID\sadm_barkills
  

Some of this content has been previously posted on the sharepoint_tech list, but much of it hasn't.

---

Everything within Sharepoint is either a list or an item in a list. In some cases, you'll see a different name, e.g. document library or posts or alerts, but you'll notice that all of these are simply lists with a special name.

Because everything in Sharepoint is a list, knowing the common set of list functionality is important. And that's what this post is all about. Knowing this information will provide some basic understanding which you can build upon. Take note that all of this functionality is available in both WSS (the version of sharepoint which comes free with the server OS) and MOSS.

Every list has:

  • "Permitted" content types
  • Columns (or site columns)
  • Views
  • Permissions
  • Workflow (optional)
  • Versioning (optional)

Content Types

Content types in particular are interesting, providing the basis for a managed experience. So for example, with a custom content type, you might:

  • Associate a word template with a document library so anyone else can re-use your template to create new documents of that type
  • Associate workflows with a custom content type
  • Associate information management policy with a custom content type, e.g. define a retention policy for items of that type
  • Have managed metadata from within sharepoint. With managed metadata you might enable better search relevance, have a better ability to manage items within the list, plug metadata values into content regions within a template-like content type, and so on
  • Re-use defined content types across sharepoint sites

Think of content types as the schema of your lists. They define what types of data items are possible in any given list. There are many default content types which are generally useful (about 20 by default). For example, all Office documents are the document Sharepoint content type. You can build new content types from existing content types. When you've enabled a content type on a list, the result shows up for users in the New menu as a new type of item they can create.

Columns

Each item in a list can have many pieces of data associated with it. These pieces of data are called columns, and you can add additional columns to any list.  Those additional columns can represent whatever data you'd like to capture.

For any column in a sharepoint list, you can "normalize" the values--meaning require the values to come from a set of known/controlled/approved values. This requires that the approved values are either in another Sharepoint list within that sharepoint site or in a connected external data source. To do this, when you add a column you use the lookup type to link the values to the other sharepoint list. This results in a dropdown menu for the new column for each item in the list. This kind of functionality makes categorizing things in lists easy w/o any need for coding.

Views

Each list has some default views defined, e.g. the 'All items' view which displays all items in the list. Alternatively, you can define custom views which target any of the properties of items, including content types and columns.

Permissions

Each list has permissions. If someone has no ability to read a list, they won't see the list at all from parent sites. Sharepoint 2007 also supports per-item permissions, so for example, you can permit something at the list level, but not permit it for a specific item.

Workflow

With workflow you can create actions which happen when items are created, deleted, changed, etc. Out of the box workflows include: Approval, Collect Feedback, Collect Signatures, Three State. The out-of-the-box workflows are pretty useful, but you can also create custom workflows (with Sharepoint Designer) without a lot of hassle.

Versioning

Versioning provides content management. Changes to items are incremented as minor versions. List content owners control major versions, and only major versions are "published" for general viewing. This feature is much more extensive, so we'll leave the description limited for now, and perhaps come back to it in another post.

Other List Features

Every list within sharepoint can be "exposed" via a webpart from another page. This respects the original list's security, but allows you to manage the user experience for that list. By exposing a list via a webpart you can remove portions of functionality (say the ability to edit items), use a view to change what is exposed, target audiences (meaning that the webpart only is displayed for certain viewers), etc. Dashboards, personal sites, the 'all site content' page are examples of this 'exposure via a webpart' functionality.

This webpart exposure makes it very easy to design a portal like experience with Sharepoint. End users often comment that they didn't realize that working with web parts would be so easy. It's not much more complicated than click and drag.

The Action Menu

Every sharepoint list has an action menu which exposes list functionality. The following are standard list actions:

  • Export to Spreadsheet. Put the list into an Excel Spreadsheet, opening Excel on the client computer.
  • Open in Access. Put the list in an Access database.
  • View RSS feed. See the raw RSS feed. Every list is RSS-enabled by default, allowing you to change the flow of web-oriented data from a pull-based model to a push-based model. This is a very significant change, especially for collaborative lists with critical information.
  • Alert Me. This allows you to request an email when things change. There are some configuration options here. Again, this enables you to turn the information flow into a push-based model.

Special List-Specific Actions

Some lists have special actions available to them. For example, a Document Library has the 'Connect to Outlook' option, which allows you to take documents offline for viewing and editing.

Another interesting specific action for a Document Library is the 'send to' action. On any item you can right-click and choose to 'send to other location', specifying another Sharepoint site (doesn't have to be on same server). This creates a "live link" in that other sharepoint site. This option can/should be used to give read-only access on another site. "Live link" means that the source can optionally notify the other sites of updates to the source doc which enables intelligent document re-use.

There are other basic list features and functionality not covered here. Send in your favorite candidate and I'll add it here.

Sharepoint8/14/2007 7:07 AMNETID\sadm_barkills
  

Over the past week, I've been posting like mad to the sharepoint_tech mailman list. If you aren't subscribed, you should consider getting on that list. But don't worry about missed content, because I'm going to re-post all that stuff here (sorry about duplication, but I think it makes sense for the content to be re-accessible). So expect a torrent of sharepoint related info here soon.

With a bit of my time opening up now that the Nebula domain migration has been put on hold, I've been really digging into understanding Sharepoint 2007. Like everyone else, I'm interested in learning what the buzz is about, but I also bring a focus which is very architecture-centric--the "big picture" if you will. The funny thing about such a "big picture" focus is that it requires both a breadth and depth of technical detail. This is really hard to get with something like Sharepoint, where the feature set is so large. But for the first time, I'm starting to get a sense (dare I call it a vision?) for what the Sharepoint architecture might look like here at the UW.

Fundamentals

So there are several fundamentals you have to understand about Sharepoint:

  • Almost everything in Sharepoint is stored within a SQL database. There are some exceptions and it's the exceptions which often drive architecture.
  • Almost everything in Sharepoint is a list or an object in a list. The exceptions here aren't significant.
  • There are some fundamental, basic-building-block kinds of objects which define what most users see and experience. Those objects are:
    • Content types. Most everything revolves around content types. If you're like me last week, you probably don't know what they are. Content types are the schema component within the data architecture of Sharepoint. So every "item" within every list in sharepoint has a content type, e.g. document, calendar item, contact, etc. A large set of the "global" functionality within Sharepoint is targeted at content types. For example, data retention policies are best targeted at content types. Workflows are best targeted at content types. Reuse of a custom list item on other sites requires that you create a content type. So this is one of the basic building blocks within Sharepoint.
    • Site templates. This defines what master page, webparts, content types are available by default initially for a new site. You can create new site templates, and you can modify a site based on a site template to be completely different from the initial default state.
    • Master pages. These define a template for the pages which are associated with them. They include "content regions" and webparts where you plug in your content. Master pages support styles, aka cascading style sheets (CSS). This is how you'd "brand" the look and feel of web pages.
    • Web parts. Web parts are what define most of the core cool functionality users see within Sharepoint. Webparts actively do something, resulting in content which is displayed on the webpage.
  • Shared Services. There is a subset of Sharepoint functionality which can be shared between Sharepoint farms/servers. The components of Shared Services are:
    • User profiles and Personal sites (My Sites)
    • Search (i.e. search indexes)
    • Audience definition (to facilitate targeting content dependent on membership in an audience)
    • Excel Services (browser based, server processed Excel features. i.e. clients don't need Excel; content authors do)
    • Business Data Catalog (this is hard to explain, and I'll likely blog about it separately)

Big Picture proposal

So getting back to the big picture, I foresee a central instance of MOSS, which primarily provides two Sharepoint features for everyone:

  • Personal sites & profiles
  • Search indexing

Nothing else. I'd call this the UW Sharepoint infrastructure.

For all the other functionality, I foresee hosted instances of Sharepoint which consume the shared services of the UW Sharepoint infrastructure. So that'd mean that departments could bring up a sharepoint server and offer whatever combination of features were appropriate for their departments, while getting the benefit of broader UW search base and centrally-provided personal sites/profiles. So a hybrid approach which maximizes where it most makes sense to do the work.

Additionally, I think a centrally *hosted* instance of MOSS is needed to provide the full complement of features for those departments which would rather not run and administrate their own sharepoint server. In some way, this could be scoped to fit into a cost-recovery sort of arrangement.

Finally, given the Sharepoint licensing implications for sites with a public audience (i.e. anonymous access), I think a centrally hosted instance of WSS is also needed. For example, this public blog might go onto that server. Other examples might include public departmental websites. The cost for us to get an internet connector license on MOSS is slightly more than $7K, so it's definitely cheaper to just buy another server ($3-4K) and put WSS on it.

There's been some talk about taxonomy and governance with respect to Sharepoint here at the UW. There are some good resources out on the web on this topic (I'll post them in subsequent posts). One of the beautiful things about what I'm proposing here is that a large part of the painful process of defining the specifics of taxonomy and governance university-wide is side-stepped. It doesn't all go away, e.g. we still want/need a consistent set of vocabulary that everyone uses and there is still value in defining common roles/responsibilities.

I'd be interested in hearing what folks think of this. Questions welcome. :)

Sharepoint8/10/2007 2:18 PMNETID\sadm_barkills
  
Good evening!
 
I'm pleased to announce that we have finalized (save for some minor typos) the architecture for the first release of the UW Microsoft Exchange service. 
 
This document represents the last several months' effort by C&C along with some assistance from Microsoft Consulting Services.
 
It's been posted to this site for your information and review...
 
Exchange8/1/2007 5:00 PMD. Zazzo
  

While at TechEd, I blogged a bit about the cool features I was seeing in the Sharepoint Search functionality. I've finally managed to find a bit of extra time to write more about that to help bring the exciting details to a wider audience.

Like most search providers, Sharepoint Search crawls content, creates an index based on the results of crawls, and then users search against the index.

Any given Sharepoint site is configured with a single Shared Service Provider (SSP). This SSP determines a few architecturally-oriented configuration settings, including what Search index end users access when searching from that Sharepoint site. So any given Sharepoint server might have many SSPs, and many different search indexes. And in contrast, many Sharepoint sites across many Sharepoint servers could share the same SSP and Search index. So from a Sharepoint architectural design there's a lot of flexibility for which underlying index is used.

OK, so admittedly that's not very exciting. I'll get to the exciting stuff now ...

The HOW

So the effectiveness of any search offering depends on how relevant the results it returns are. Sharepoint has a rich ability to calculate relevance. The factors it supports are:

  • Title and filename
  • Metadata
  • Density of search term (e.g. 10 mentions in a 2 page doc vs. 10 mentions in a 100 page doc)
  • Keywords (i.e. terms with special meaning to an organization that have special behavior associated with them. This is closely tied to the best bets feature, and additionally you can provide synonyms for keywords that broaden the results returned and the likelihood of the keyword being triggered)
  • Best bets (i.e. results that have been manually tagged as a "best bet")
  • Security (i.e. users only see results they individually have permissions to see)
  • Hyperlink click distance (number of "clicks" from an authoritative site)
  • HTML anchor text (that's the text of hyperlinks)
  • URL depth (how nested within a website directory structure is it?)
  • URL text matching
  • Document Title (office docs only)
  • De-duplication of results (no duplicate results returned)
  • Language of choice (as determined by browser language)
  • Search scopes (definable subsets of all the index)

The WHAT

Another key element in what is returned in a search is what kinds of sources can be crawled. Sharepoint Search supports a diversity of sources:

  • Sharepoint sites
  • SMB (i.e. Windows) file shares
  • Exchange public folders
  • Non-Sharepoint websites
  • Active Directory or any LDAP directory
  • Sharepoint profile databases
  • Databases
  • Web applications

Obviously, there is a ton of value here by being able to search more than just web-based sources.

There are some details under the hood here (which I freely admit I don't fully understand yet) with respect to secure sources that require authentication/authorization. You can specify crawler credentials for each source, but I'm not sure I understand how that security is respected.

The Experience

So Sharepoint Search gives you the ability to move beyond just web searching, and it gives you a bunch of knobs and buttons to help make results more relevant.

Let's look a bit more closely at one of those knobs. Search scopes are a way to define a more limited set of the index to search against. Assuming you define relevant scopes, this improves the relevance of search results. In a UW enterprise Sharepoint Search offering you might imagine scopes that are targeted to specific kinds of content (via metadata or filename), to specific disciplines (via metadata, sources, or URL location), or to specific departmental sources (via source). Nice feature.

So as a user, you have the ability to save searches. And optionally you can configure alerting on those saved searches. Which means the user would be emailed when the search results change (and I'll admit the implications of this feature scare me). You can also optionally choose to save the search as an RSS feed. Both these optional features have the effect of turning search from a pull to a push mechanism, which is very nice.

From within Office, for example Word, you can also issue a search against Sharepoint Search. You right click on a word, choose "Look Up" and assuming you've configured the search providers within Word to point at the Sharepoint search provider, it'll work.

From a Search service provider perspective, there are a number of nice features. For example, Sharepoint provides usage reporting which can help you tune the various factors noted above to make the search service more relevant. Typical canned usage reports that might be helpful are:

  • search result destination pages
  • queries with zero results
  • most-clicked best bets
  • queries with zero best bets
  • queries with low click-through
  • top query origin site collections over the previous X days
Sharepoint8/1/2007 1:26 PMNETID\sadm_barkills
  

Many folks know that the LABS domain was originally built for the Catalyst general campus computing labs. When we ran the winauth project to build UWWI, aka the NETID domain, we closely partnered with Catalyst (which wasn't part of C&C at the time) to ensure they could move to the new solution so we could retire the LABS domain. From a Windows client perspective this wasn't a big deal; however, Catalyst had 100+ Macintoshes which were also authenticating to the LABS domain. The solution they were using was a slight variant on the Apple-provided solution for Macs authenticating to Active Directory. And before going further, it should be made clear that there are *many* solutions for integrating Macintoshes with Active Directory. At the end of this blog post, I'll include a list of relevant URLs.

If you want full understanding, you should read about the Apple-provided solution. But in the interests of keeping everyone on the same page, the quick and dirty summary is that Apple provides an AD plug-in for Mac OS which you configure. The configuration involves some level of choice where some of the options include choosing which Active Directory attributes to use for certain info.

So after we had a decent understanding of what was currently happening in the LABS domain with Catalyst Macs, we made a few tweaks because in LABS they were using attributes which we foresaw problems with them continuing to use. And it was a good excuse for us to get the C&C uid and publish it within UWWI. These tweaks involved a minor custom schema extension I wrote, and a few extra jobs for the kiwi provisioning component to populate those attributes.

Because of the timing of the UWWI release (too close to fall quarter start), Catalyst didn't actually switch over any Macs to UWWI until this summer. And when we started looking into whether it all worked back in April, we discovered a few problems.

Turns out that the Macs need to query the NETID domain for the relevant user attributes prior to actually authenticating that user. By default, the Macs query AD under the security context of the Macintosh's computer account. Alternatively, you can specify a user account for the Mac to perform that query under. The relevant user attributes are (with parenthetical comments from a colleague at U Texas):

objectCategory (necessary for locating the user object)
objectGUID (necessary or the login session can't be exited)
objectSID (same as the previous)
primaryGroupID (necessary to prevent system from reporting that there is no workgroup upon login)
sAMAccountName (necessary to find the object)

To this Apple-minimum set, for Catalyst we added:

uidNumber
uwCatalystMacHomeDir

And so, one problem we found was that the user doing the query needed permissions to read this set of attributes. That isn't a problem for the apple-provided set, as that set is generally readable by authenticated users, but the two we added are not. So we granted read perms to authenticated users for those two attributes as neither involves privacy info.
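
If you want to check whether a given account can actually read that attribute set before pointing a Mac at it, a quick directory query run under that account's security context will tell you. A minimal PowerShell sketch; the LDAP path and the sample NetID are placeholders (the two Catalyst attribute names are the real ones discussed above):

  # Minimal sketch: query the attributes the Mac AD plug-in needs, as a read-permission check.
  # Run it under the account whose access you want to verify; LDAP path and user are placeholders.
  $root = New-Object System.DirectoryServices.DirectoryEntry("LDAP://netid.washington.edu")
  $searcher = New-Object System.DirectoryServices.DirectorySearcher($root)
  $searcher.Filter = "(&(objectCategory=person)(sAMAccountName=barkills))"
  "objectGUID","objectSid","primaryGroupID","sAMAccountName","uidNumber","uwCatalystMacHomeDir" |
      ForEach-Object { [void]$searcher.PropertiesToLoad.Add($_) }
  $result = $searcher.FindOne()
  if ($result) {
      $result.Properties.PropertyNames | ForEach-Object { "{0} : {1}" -f $_, $result.Properties[$_][0] }
  }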

Another problem was that the Macintoshes don't seem to have global catalog location functionality when the GC they need to get to is outside the Active Directory forest they belong to. There was also some question as to whether the Macs could use NTLMv2 for the LDAP query, or if they had to use Kerberos or NTLMv1. And if you followed that, you'll see this is a big problem. This means that the Macs can't use their own computer account (from the eplt2 domain in the UW forest) to do the pre-user-auth ldap query. This is because EPLT2 doesn't have a Kerberos trust to NETID (and couldn't, because not all the domains in that forest have reached the Windows 2003 domain functional level), and the domains are in different forests. And at the time the NETID domain didn't accept NTLMv1 authentication (it's since been relaxed to accept NTLMv1). This meant that the solution space here was:

a) let Catalyst join those Macs to the NETID domain before we have a general Delegated OUs solution
b) accept the security problems inherent in hard-coding some uw netid password across 100+ Macs and navigate any NTLM issues (and those NTLM issues are now non-existent)

We decided that b) was heinous enough to justify a). And so today there are 100+ Macs in the NETID domain, and it's a somewhat delicious irony that the first client computers in UWWI are Macs. This is, of course, a special limited agreement, and we are definitely not ready for Windows computers in the NETID domain.

I still need to gather up the relevant client settings for the Apple AD plugin from David Cox, so we can publish all of this somewhere more official, but I've had a to-do to get this info into a blog post for too long, and wanted to get this out the door now. :)

Relevant URLs:

Base Apple Macintosh AD integration
http://www.apple.com/itpro/articles/adintegration/

Catalyst Mac Schema
http://www.netid.washington.edu/documentation/catalystMacSchema.aspx

Integrating Macs with AD topical website
http://www.macwindows.com/AD.html

Thursby's AdmitMac
http://www.thursby.com/products/admitmac-eval.html

Centrify DirectControl for Mac OS
http://www.centrify.com/directcontrol/mac_os_x.asp

UW Infrastructure7/30/2007 9:55 AMNETID\sadm_barkills
  
As I mentioned in an earlier post, Ops Manager 2007 includes a new feature called Audit Collection Services (ACS).  At a high level, ACS sits alongside the OM agents on a managed system and forwards every security event generated on the system to the Audit Collector service running on a management server.  Each event is then processed for alerting rules and stuffed into a database for future use.  Obviously, with the events stored in a database, there are myriad possibilities for what can be reported on. 
 
For example, if you know you'll need to show who accessed a given file or folder, simply enable auditing for the file or folder and then use ACS' built in reporting to provide a report of the information.  You can grant access to the report to any Windows group.
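
As a concrete example, adding the audit rule (the SACL entry) can be scripted. A minimal PowerShell sketch; the folder path and identity are placeholders, and you also need an audit policy that enables object access auditing plus the 'Manage auditing and security log' privilege:

  # Minimal sketch: add a SACL entry so reads of this folder generate security events,
  # which ACS would then collect. Path and identity are placeholders.
  $path = "D:\SensitiveDocs"
  $acl  = Get-Acl -Path $path -Audit
  $rule = New-Object -TypeName System.Security.AccessControl.FileSystemAuditRule `
              -ArgumentList "Everyone", "ReadData,ReadAttributes", "ContainerInherit,ObjectInherit", "None", "Success"
  $acl.AddAuditRule($rule)
  Set-Acl -Path $path -AclObject $acl   # writing a SACL requires SeSecurityPrivilege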
 
One downside to ACS is its hardware requirements.  Even smaller deployments can end up requiring some fairly hefty hardware.  This is primarily due to the volume of security events that can be generated from just a few domain controllers, not to mention member servers.  Microsoft's data on hardware sizing is still in its infancy and should probably be taken with a few grains of salt.  When in doubt, oversize.  It's better than having to migrate your environment to new hardware or adding hardware you didn't budget for.
 
We're in the midst of deploying Ops Mgr to help support the monitoring requirements for the new Roadmap projects like Exchange.  As we get further into the process, I'll post some details of what we're doing.
Engineering6/18/2007 11:10 AMJames Morris
  
There's this annoying feature of group policy where, if a domain client can ping a domain controller, it assumes it can actually get to the group policy. In cases where the domain client is not on a UW network, this can be a serious mistake.
 
Many ISPs block the netbios ports (135/137/139/445) which prohibits the group policy application process. And the timeout on retrying to get that policy is lengthy. So both computer boot and user login can be adversely affected.
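
The mismatch is easy to demonstrate from an off-campus client: the DC answers ping while the ports group policy actually needs are filtered. A rough PowerShell sketch (the DC name is a placeholder, and a real check would add connect timeouts):

  # Rough sketch: ICMP can succeed even though the SMB/RPC ports group policy needs are blocked.
  $dc = "dc1.example.washington.edu"   # placeholder
  "ICMP ping: " + (Test-Connection -ComputerName $dc -Count 1 -Quiet)
  foreach ($port in 135, 139, 445) {
      $tcp = New-Object System.Net.Sockets.TcpClient
      try     { $tcp.Connect($dc, $port); "TCP $port : open" }
      catch   { "TCP $port : blocked" }
      finally { $tcp.Close() }
  }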
 
This is clearly a huge issue for those folks who travel or who take laptops between work and home.
 
Many Windows administrators on campus are already aware of this issue, and have taken steps to eliminate the problem. There are several solutions which work. These include:
 
a) Move the DCs to p172 so that off-campus access is restricted. Couple this with offering a VPN service so that off-campus services are not inhibited.
b) Same as a, but use a firewall instead of p172.
c) Block ICMP traffic from off-campus networks.
 
As we look to making UWWI a resource which more people can rely on, this is clearly an issue we have to address.
 
For this reason, we have plans to implement solution c above, as it is the least impacting and doesn't require a VPN service which we aren't prepared to offer (but take heart that a VPN service is something that is being considered).
 
I should also mention that currently p172 is not routed to Bothell, and seeing as Bothell has a trust to NETID, that would be a breaking change for them. There are plans to get p172 routing to Bothell, but until that happens solution a is not a viable solution.
 
I don't have a date upon which we'll make this change, but it will very likely be in the next month. Comments welcome as always.
 
P.S. Beginning with Vista, this is a non-issue. The changes to group policy eliminate the ping and substantially shorten the timeout period. With the newer OSes you can also set the timeout period.
UW Infrastructure6/14/2007 10:03 AMNETID\sadm_barkills
  
So for awhile now, Scott Barker has been engaging us in a discussion about the NTLM level within UWWI (aka the NETID domain).
 
Scott's point was that by default all the Windows operating systems until Vista have an LMCompatibilityLevel which is not compatible with the setting in the NETID domain. For systems in managed domains, this isn't a problem since you can easily set that LMCompatibilityLevel via group policy. But for computers that don't fit into that profile (i.e. not in a domain or not Windows), this introduces a support problem.
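
For a Windows machine that isn't in a managed domain, the same setting can be made directly in the registry; this is the well-known LSA value that the group policy setting writes, and the level you pick depends on your situation. A minimal sketch:

  # Minimal sketch: set LmCompatibilityLevel on a standalone Windows box.
  # Valid values are 0-5; 3 means the client sends NTLMv2 responses only.
  New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" `
      -Name "LmCompatibilityLevel" -PropertyType DWord -Value 3 -Force | Out-Null
  # Reboot (or at least start a new logon session) to be sure the change takes effect.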
 
While we initially contended that security was more important than ease of use, a couple months ago we changed our mind.
 
What changed our mind was a subtle combination of details, and a desire to be responsive to customer needs. For 95+% of authentication traffic, NTLMv2 session security will be employed regardless of the LMCompatibilityLevel negotiated. This provides an excellent level of on-the-wire encryption, which protects against the well-known exploits of NTLMv1 authentication. Another critical factor was the non-Windows clients. Many of them don't have full support for NTLMv2. And this point is a critical one when you begin to think about deploying a campus Sharepoint service.
 
So we--well, Brad Greer specifically--went about applying for an exception to the various UW computing security policies (which suggest that we really should be requiring NTLMv2) so that we could relax the authentication level to NTLMv1. This included a review by the C&C's internal Security Infrastructure Team, a review by the CIO, and a review by the PASS council (I don't know what that acronym stands for). I'm happy to report that we have run the gamut of approvals, and are clear to relax the LMCompatibilityLevel to NTLMv1 (i.e. LMCompatibilityLevel=4).
 
We plan to make that change Tuesday June 19th.
 
An email notice will also be going out to the announcements mailing list for UWWI of this coming change. For those who don't know about that mailing list, today you get on it by having requested a trust to UWWI.
 
It's likely that sometime in the future when there is wider support for NTLMv2 by default, we'll look at moving the LMCompatibilityLevel back, but that probably won't be for awhile.
UW Infrastructure6/14/2007 9:38 AMNETID\sadm_barkills
  
In one of his sessions at TechEd, Steve Riley presented a side of the whole DRM debate that I hadn't considered before - protecting a company's confidential data and intellectual property.  Fundamentally, DRM provides just another mechanism by which an ACL can be expressed.  In the case of the recording and movie industries, the user is given the ability to view or listen to the media, but not make copies or share it.  In other industries, users might be given the rights to work with documents that contain sensitive data from their workstation, but not be allowed to copy the data to a removable disk or copy it into an email.
 
Of course, to be truly effective, every application used, from the OS to the word processor to the email client and beyond, must respect the DRM technologies employed.  Otherwise, DRM merely provides an appearance of protection without actually protecting anything.  Ultimately though, you have to trust the user to do the right thing as well.  Users could do lots of things to circumvent a DRM system, from, as one of Steve's slides showed, simply pulling the document up on their computer and taking a picture of the monitor, to memorizing the data or transcribing it onto paper.  Certainly the latter can be prevented by banning writing instruments, but is that practical?  Probably not, though it wouldn't surprise me if there are places that do such things.
 
As the UW's own information security policies take wing, we should consider the impact that DRM technologies might make and whether it makes sense to employ them in some areas to protect data classified as "confidential". 
Engineering6/12/2007 7:46 AMJames Morris
  
Over the past couple months, we've become convinced that for many of the services we want to offer to be successful, we need to implement Identity Lifecycle Manager 2007 (ILM).
 
ILM is a combination of what used to be MIIS, Microsoft Identity Integration Server, and CLM, Certificate Lifecycle Manager.
 
Basic ILM
 
With ILM, we can transform the loosely synchronized whitepage data within UWWI into more strongly synchronized data. This will help the Exchange service succeed. And there are plans to make this happen in time for the initial Exchange service release. Hopefully there won't be any snags.
 
Group Synchronization
 
However, there are a host of other places where ILM would come in very handy. We might use it to replace the custom "slurpee" code I wrote last summer to synchronize the Groups Directory Service (GDS) data into UWWI. So course groups, standard GDS groups, and affiliation groups (and person affiliation) might come via ILM in the future. Moving this direction would give us the opportunity to also export groups in UWWI to GDS. It could also give us some opportunities to create auto-groups--groups programmatically formed on the basis of values of a certain attribute. So say all the users who have a specific mailstop attribute value--which admittedly is a pretty lame example. However, as the data we have about users gets more rich, this could be very useful.
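
To make the auto-group idea concrete, here's a minimal PowerShell sketch of the underlying query; the attribute name and value are made up, and a real ILM rule would do the equivalent inside its sync logic rather than in a script:

  # Minimal sketch of an auto-group: everyone whose (hypothetical) mailstop
  # attribute has a given value becomes the membership list.
  $searcher = New-Object System.DirectoryServices.DirectorySearcher
  $searcher.Filter = "(&(objectCategory=person)(uwMailStop=354842))"   # attribute name/value are made up
  [void]$searcher.PropertiesToLoad.Add("sAMAccountName")
  $members = $searcher.FindAll() | ForEach-Object { $_.Properties["samaccountname"][0] }
  "An auto-group built this way would have {0} members" -f @($members).Count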
 
Future Group Management
 
Microsoft has further plans around ILM and group management in the Identity Lifecycle Manager "2" product scheduled for release summer 2008.
 
These include tight integration with Outlook, a web portal for group provisioning, delegated group management, and workflow built into the entire experience--which is email-aware.
 
Fred sends an email to Sally telling her she needs to be in the X group. Sally clicks on the outlook groups menu bar and requests to be in the X group. George gets an email request from Sally to be in the X group. George approves the request via his outlook groups menu bar. Sally gets a request approved email.
 
Alternatively, a web portal is used for the request/approval actions.
 
George designs the workflow for the group via the portal via a simple form process.
 
All based on Microsoft's new workflow framework plus ILM plus AD.
Very fancy, and you'll also notice that it provides a path for non-Microsoft platform based users. I can foresee a future where something like this is used for UWWI group management and the UWWI groups feed back into GDS.
 
Certificate Management
 
Finally, we saw quite a bit about certificate management from ILM.
 
ILM helps solve the smartcard/certificate provisioning problem. You can use ILM to print a bulk set of smartcards.
 
ILM has a nice set of request/approval workflow via a web portal, so that people can request new smartcards and ask for other smartcard/certificate lifecycle tasks. For example, being able to remotely unlock a smartcard blocked by a triply failed PIN entry. And it isn't a dumb set of logic either--the process uses identity proofing so that it isn't one of the bad guys you are unlocking a blocked smartcard for.
 
It has a template feature so you can set people up for a sets of certificate uses in a very easy manner.
 
It provides a rich auditing and reporting feature set.
 
It has logic built in for temporary smartcards (user forgets his permanent smartcard at home), 'Recover On Behalf Of', and external CA integration.
 
The existing product does not yet have support for BitLocker or CardSpace certificates, but will in the future.
UW Infrastructure6/11/2007 8:01 AMNETID\sadm_barkills
  
Just left a session on this. Very exciting technology.
 
First, event forwarding: with Vista/Server 2008, there is now the ability to ship events off the box to a remote system, just like Unix has had with its syslog daemon. In this case, from the remote system, you setup a subscription object which defines what to do. That object holds a definition of what events to request, how often to request them, whether to ask that they be pushed instead of pulled, the security context to request them with, and so on. So you might forward all the system events from a server 2008 box to your administrative workstation. Or all the security events.
 
You might go hog wild with this, having all your Windows boxes (but only those with the newer OSes) forward events to a central collector. And Server 2008 will give us group policy that can control this so you have even less management needed to setup such a scenario. It's certainly a lower-cost alternative than the similar features in SCOM (aka MOM) plus Audit Collection Services (an optional feature of SCOM). But you'd need to build all the logic, and data management stuff behind that to truly replicate what SCOM can do in this area. So this seems like an interesting low-budget option, that is very scalable, but requires quite a bit of custom development glue to get it to a usable state.
 
Fortunately, all the new events are in XML format, and Powershell makes it very easy to work with the events. You can slice and dice them, doing automatic data type conversions, and so forth. So that customization isn't an unreachable goal for only those with more advanced development skills.
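
For instance, here's a minimal sketch of slicing forwarded events with PowerShell; it assumes the collector's default "ForwardedEvents" log, and it uses the Get-WinEvent cmdlet and [pscustomobject] syntax from later PowerShell versions:

  # Minimal sketch: read collected events and work with their XML representation.
  Get-WinEvent -LogName "ForwardedEvents" -MaxEvents 50 | ForEach-Object {
      $xml = [xml]$_.ToXml()                      # every Vista+ event renders as XML
      [pscustomobject]@{
          Computer = $xml.Event.System.Computer
          EventId  = $xml.Event.System.EventID
          Time     = $_.TimeCreated
      }
  }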
 
Moving on to WinRM, or Windows Remote Management, or what Microsoft will now start referring to as Web Services for Management (WS-Man). Sounds like something only web guys would want, huh?
 
Well it's much more interesting than that. What it enables is remote management of computers in connectivity-challenged scenarios. It uses port 80 for communication outbound and inbound, but does not actually require IIS or even that port 80 be listening on the remotely managed computer. I'm still really fuzzy on how this is accomplished, but there's something about registering with the http.sys component which allows all this. This allows me from my kiosk here at TechEd, which is behind a firewall that only allows port 80/443 outbound, to get to my home vista computer, also behind a firewall which only allows port 80/443 inbound. And I can do all the kinds of things that WMI allows. Which is pretty much anything: reboot, stop/start services, grab events (event forwarding uses WinRM), etc, etc. Security is still respected, and assuming Windows end to end, you get the benefits of NTLMv2 session security encryption.
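
For what it's worth, this is the same plumbing that PowerShell remoting was later built on. A hedged sketch of that eventual experience (the cmdlets below come from the later PowerShell 2.0+ remoting layer, not something available at the time of this post; the computer name is a placeholder):

  # Minimal sketch: remote management over WinRM/WS-Man via the later PowerShell remoting layer.
  # Run Enable-PSRemoting once on the machine to be managed, then Invoke-Command from anywhere.
  Enable-PSRemoting -Force                         # on the machine you want to manage
  Invoke-Command -ComputerName "vista-home.example.org" -ScriptBlock {   # placeholder name
      Get-Service | Where-Object { $_.Status -eq "Stopped" } | Select-Object -First 5
  }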
 
I'm still trying to wrap my head around this development.
Engineering6/8/2007 7:15 AMNETID\sadm_barkills
  
Quite a few sessions at TechEd have been dedicated to a topic that's increasingly getting more attention; both at the UW and around the world in general: securing mobile computers.  It's certainly an active topic within portions of C&C, particularly Nebula and the security groups.
 
As you probably know, certain editions of Windows Vista ship with Microsoft's answer to the desktop encryption problem, BitLocker Drive Encryption or BDE.  Interestingly enough, it didn't start out as Microsoft's answer, but it certainly solves most of the issues in the space, at least for Windows Vista (or Server 2008) systems.  Microsoft intended BDE to merely protect the OS; the data protection functionality, which many view as perhaps more important, just happened to be an added bonus.
 
Another interesting side-benefit of BDE is system or disk decommissioning.  If you destroy the keys, the disk simply becomes a brick, albeit a brick you could re-partition and reuse as a disk, but you'll never get the data off it, nor will anyone else.
 
The original purpose of BDE is only served when the system has at least a TPM v1.2 chip.  The TPM is used to ensure the integrity of the system prior to handing execution over to the OS.  If the TPM detects a change in the system, it will halt the boot process and block access to the system.  A recovery process then has to be done to gain access to the system -- and it ain't easy (more on this in a later post).  Without a TPM, this function is unavailable, but you can still use the volume encryption functionality. 
 
I'll save details and some thoughts about volume encryption for another post.  In the meantime, I'd strongly suggest, particularly for laptops, considering only buying hardware that includes a TPM v1.2 chip. 
Engineering6/7/2007 10:42 AMJames Morris
  
More from TechEd ...
 
So I've finally had a chance to wrap my head fully around what problem spaces Sharepoint solves and what spaces Groove solves. And they are very complementary to each other. And fortunately, they have some nice integration features.
 
If you're like me, you know that both are primarily collaboration technologies, but you might not know much about Groove specifically, as we've had some campus events on Sharepoint but Groove info has been hard to come by.
 
So Sharepoint best addresses the well-connected clients who are in managed domains. It has the nice content management stuff, the portal experience, the exciting workflow features, the feature-rich search, blogs and wikis, and of course, just basic file collaboration. And this is all a web-based solution.
 
Groove isn't web-based. It's fundamentally a peer to peer client application. You install the client, and by default, it installs a service which opens a couple ports and listens.
 
You then setup a workspace, and invite some friends or colleagues to your workspace. Your invitation can go to them in a couple ways, but let's just say it goes via email. They respond, approving your invitation, and finally you get back an email which you must verify to complete the provisioning process of getting them into your workspace.
 
Note I said "provisioning". So in the background of this process there is a set of keys--certificates--being passed back and forth. Each workspace has a cert pair. Each individual/computer has a cert pair. And there's another temporary cert pair too, I think, specific to the provisioning process. Anyhow, Groove handles all those details, and in a secure fashion. No need for a certificate authority, and it's all invisible to the user.
 
At the end, the two of you are now in a collaborative workspace where you can share files and so forth. Note that you don't need to know anything more about your friend/colleague than their email address. No shared domain trust, no user account name, nothing. You don't even need to know where their computer is. They could be in Africa one day, Thailand the next, and that's all hidden by the Groove experience.
 
So how do they talk to each other?
 
Here's what the Groove client does to try to find other computers in its workspace:
  1. Send local subnet broadcast looking for workspace peers with that listening port.
  2. Send outbound to the Microsoft-run Groove relay server. Ask for workspace peers, and any updates from them. The outbound ports used are the following, in this order:
    1. 2492
    2. 80
    3. 443

So in the background each Groove client checks in with the Microsoft Groove relay server, and this server provides the location resolution, as well as tells each Groove client when there are updates in their workspaces.
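
If you're curious whether a given network will let Groove clients do this, you can probe the same outbound ports in the same order the client tries them. A rough sketch using the much later Test-NetConnection cmdlet; the relay host name is just a placeholder, not the real Microsoft relay address:

  # Rough sketch: check outbound reachability of the ports Groove falls back through.
  $relay = "groove-relay.example.com"   # placeholder
  foreach ($port in 2492, 80, 443) {
      $result = Test-NetConnection -ComputerName $relay -Port $port -WarningAction SilentlyContinue
      "Port {0}: {1}" -f $port, $(if ($result.TcpTestSucceeded) { "reachable" } else { "blocked" })
  }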

Neat, huh?

So getting back to the topic, Groove has built in support for importing Sharepoint document libraries. You simply setup the import, and tell it when to run. You can also export back to Sharepoint document libraries.

This combo allows you to do a very mobile, disconnected style of collaboration, while also posting the results of that more mobile collaboration to a content-controlled Sharepoint site with a broad, well-known and connected set of users.

So now you know how the two collaboration applications aren't in direct competition, and actually are designed to solve different aspects of the same problem. So go Groove. No server required. No UWWI account required. :)

Of course, there are a few wins to running a Groove server. But from what I've been able to tell, those wins aren't significant enough to justify rolling out a UW Groove server. But I'm always open to hearing other arguments.

Sharepoint6/7/2007 9:39 AMNETID\sadm_barkills
  
Following up on Brian's Group Policy posts, here's more about one of the tools we heard about for managing Group Policy in ways we might have never dreamed possible. 
 
The tool is PolicyMaker from DesktopStandard, which Microsoft recently acquired.  As Brian mentioned, its exact fate isn't clear to us, though there were lots of hints about it being integrated into Server 2008 perhaps.  That would be extremely cool and here's why:
 
Want to apply a certain set of policy settings, say including mapping drives, to a computer, but only when a user is logging in from a Windows 2000 SP4 computer with 512MB RAM, not on the UW network, and only if the computer object is in a specific OU and the user is in a specific security group?  Don't know why you would, but you certainly can't do that with the base tools, but you can with PolicyMaker.  The possibilities here are practically endless.
 
This functionality becomes particularly interesting, if not critical, when we're looking just a bit further down the Roadmap and contemplating how to handle Group Policy application in the delegated environment.  More to come on this over the next couple months after we've had a bit more of a chance to wrap our heads around this space and gather some information.
 
One thing we would be interested in hearing more about is just what you're using Group Policy for today.
Engineering6/7/2007 6:26 AMJames Morris
  
I'm here at TechEd with Brian, gathering a wealth of information on a number of topics and products which I'll finally get around to posting over the next few days. 
 
I attended several sessions over the last couple days on Operations Manager 2007 (OpsMgr).  For those familiar with its predecessor, MOM 2005, you'll be happy to know that Microsoft has essentially completely redesigned the product and added lots of new functionality.  That's good and bad.  It's bad from the architecture perspective, since things are way more complicated now, and from the hardware perspective, since more hardware is required for even simple deployments; there are more roles and more intensive operations going on that require more hardware and more power.  It's good in ways that are almost too numerous to enumerate.  I'll mention a couple below and add details over the next couple weeks.
 
The biggest new feature, in my mind anyway, is Audit Collection Services.  This was originally going to be a standalone product but was rolled into OpsMgr very late in the beta cycle.  For each audit-monitored system, it forwards a copy of every security event to a database for later review and reporting.  Very handy in many compliance situations, not to mention just a good security practice all around.
 
Another new feature, at least I think it's new, is Run As Execution.  This allows the monitoring system to execute tasks under different credentials at each step in the monitoring process, allowing for synthetic transactions to fully test a given application or service and determine its true health without having a single account that has way more privilege than would reasonably be sane.
 
Lastly, one of the questions we've had is whether multiple instances of OpsMgr can exist inside a single domain.  This is obviously significant in the context of UWWI and for those that may be running MOM or OpsMgr and planning to migrate into the NETID domain.  I'm happy to report that not only is it possible, but it's also fully supported by Microsoft as well.  However, the new rich delegation/roles functionality might make it possible to provide a central service down the road somewhere.  If anyone would be interested, let us know.
Engineering6/6/2007 1:29 PMJames Morris
  
I'm back with more details. I've included the list of summary bullet items I listed before with more detail so you can find what I mentioned before quickly. I have reordered the items slightly for my own purposes.
 
New GPMC and GPOE

These are the Group Policy Management Console and Group Policy Object Editor, of course. Vista/Server 2008 have new versions which have slightly different behavior, features, and support for the new Server 2008 Active Directory group policy features.

The key thing to know here is that old versions of the GPMC/GPOE will not be able to access the newer set of group policy settings, and if you edit a GPO created by the newer versions with a GPMC from an older version then you will incur an "ADM Sysvol Bloat" on that GPO. More on that in the next section ...

There's also some other stuff here, but I'll cover that later in other sections ...

New format for GPOs, called .ADMX

So group policies are a combination of two things--a directory entry in AD plus a set of files in the Sysvol. This new format has nothing to do with the directory entry. It's only about the part in the Sysvol. And that's the part that has the most problems with it today.

The existing .ADM format has awful language support. This is fixed via a special ADML file for each language you'd like to support for each ADMX. The ADMX file is now simply an XML-formatted file. These ADMX files are very small and do not include the full set of default administrative group policy settings. That full set used to be included in EVERY group policy and amounted to a 3.5Mb hit per GPO. That adds up fast to impact sysvol replication traffic. So the story here is that once you create a GPO with the new GPMC, don't edit it with a downlevel GPMC. It's not awful if you do--you are just bloating the thing--but you really don't want to if you can help it.
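
If you want to see whether you've already got bloated GPOs, summing up the on-disk size of each policy folder in Sysvol is a quick check. A rough PowerShell sketch (the domain name is a placeholder; GPOs carrying the full ADM set will stand out at roughly 3.5MB or more):

  # Rough sketch: report the on-disk size of each GPO folder in Sysvol.
  $policies = "\\example.washington.edu\SYSVOL\example.washington.edu\Policies"   # placeholder domain
  Get-ChildItem -Path $policies | Where-Object { $_.PSIsContainer } | ForEach-Object {
      $bytes = (Get-ChildItem -Path $_.FullName -Recurse |
                Where-Object { -not $_.PSIsContainer } |
                Measure-Object -Property Length -Sum).Sum
      "{0}  {1:N0} KB" -f $_.Name, ($bytes / 1KB)
  }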

There is a need to convert your old .ADM files to .ADMX format. There's a tool to do that called ADMXMigrator.

Isolated Group Policy service

XP, Server 2003 R2, and older Windows OSes run the group policy engine inside the Winlogon process (via userenv). This is not good. It means that folks can interfere with the group policy engine--either deliberately or accidentally. It also means that group policy logging for these older OSes is stuck in a special userenv.log file which shares log events with other processes that have no relationship with group policy. Ugh. I hate userenv.log. I'm sure you do too. Which leads to ...

MAJOR logging improvements

Vista/Server 2008 have two logs that are specific to group policy. Under the system log, you'll now find basic group policy events under the source "Group Policy". This is basic stuff like "Group policy X applied", "Group Policy Y failed", and such.

Greater detail is available in the dedicated operational log. Look under Applications and Services Logs\Microsoft\Windows\GroupPolicy\Operational. There you'll see a great deal of detail.

One of the other improvements in this area is the introduction of something called the ActivityID. This is an ID unique to each run of group policy. So each time a computer processes group policy a new ActivityID is used. This ActivityID can be used to track all the events from a group policy processing cycle. This makes searching for details related to an error you are tracking down much easier.
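
With the later Get-WinEvent cmdlet, you can pull that operational log and group it by ActivityID in a couple of lines. A minimal sketch:

  # Minimal sketch: group the Group Policy operational log by processing cycle (ActivityId).
  Get-WinEvent -LogName "Microsoft-Windows-GroupPolicy/Operational" -MaxEvents 200 |
      Group-Object -Property ActivityId |
      ForEach-Object {
          "Cycle {0}: {1} events, oldest at {2}" -f $_.Name, $_.Count, $_.Group[-1].TimeCreated
      }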

And in case you are wondering if the introduction of two log files is going to be a problem, don't forget that vista/server 2008 support custom log views so you can marry the two together in a custom view.

Another improvement in this area is the introduction of a new tool, gplogview.exe, which has a very cool "monitor" mode. If you've got a client with a group policy problem, you start this tool in monitor mode, redirect it to a file (pipe it), launch a gpupdate, then stop gplogview, and look at the log file. This shows you all the details of what happened in the group policy processing cycle.

Lots of relief here. I also heard about some third party help for older clients. I guess a company called SysProSoft has a PolicyReport tool which will help decode the highly enigmatic userenv.log file. I also heard a bunch of tips in the area of GP troubleshooting from a guy named Jeremy Moskowitz. He's got a new book out: Group Policy: Management, Troubleshooting, and Security. He also has a tip website and newsletter. You can visit him at gpoguy.com and his company's product is at policypak.com.

500+ new GP settings

I got no details here. Just a number. :)

Multiple local group policies

With earlier OSes, you had only a single local group policy. Now you can have many. I'm still not entirely clear on why you'd want a bunch, but from what was said I think it involves delegation scenarios or multi-use computers.

There are some interesting features here involving targeted settings. With these new local group policy objects, you can target single users or members of groups for settings.

I guess this is a good thing when you've got a kiosk where users login, but you want a substantially different configuration depending on who they are.

Network Location Awareness and no ICMP issues

Vista/Server 2008 don't ICMP ping the DCs to determine whether to apply group policy or not. That check is gone. Hooray!

The network location awareness stuff is cool. It allows you to apply certain settings dependent on what network the client is on.

There's another thing here not captured by the title ... vista/server 2008 computers which have gone to sleep will incur a group policy refresh cycle when they wake up, assuming that sleep period was longer than the group policy refresh cycle you've set. Not much you can do about this. But it does explain some of the activity you hear on your vista laptop after waking it up.

Central store

This is kind of a misleading feature name. The new central store is also within the Sysvol area. It's simply a new directory within that area, and it's a new place for the .ADMX files to live. The newer GPMC versions try to use this location first (then they try a locally cached copy of the .ADMX files, and finally they fall back to the old .ADM files).

The feature add here is for those custom group policy settings you've got. With a central store (and the .ADMX format), you no longer have to copy the custom ADM file to every computer from which you might want to view or edit that group policy object. And while it's somewhat arcane, that's a huge win in my book.
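
Creating the central store is really just a copy into Sysvol. A minimal sketch, assuming a Vista box with the inbox ADMX/ADML set and a placeholder domain name; run it as someone who can write to Sysvol:

  # Minimal sketch: seed the Sysvol central store from a Vista machine's PolicyDefinitions folder.
  $store = "\\example.washington.edu\SYSVOL\example.washington.edu\Policies\PolicyDefinitions"   # placeholder domain
  New-Item -Path $store -ItemType Directory -Force | Out-Null
  Copy-Item -Path "$env:windir\PolicyDefinitions\*" -Destination $store -Recurse -Force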

Search and filtering improvements within the GPMC

Another GPMC feature is support for searching the group policy settings. This allows you to perform a text search of all the group policy settings from within GPMC and GPOE. No more referencing the multiple excel spreadsheets that Microsoft publishes. All that info is within GPMC. Yay!!

You can also easily find the "configured" settings that you've set in an existing GPO. No more hunting through the nested hierarchy within GPOE (or having a side-by-side view so you know the exact path of each configured setting). Very nice.

(Enforced) Change management for Group Policy editing

So this is a feature of AGPM (Advanced Group Policy Management), part of the Desktop Optimization Pack, which you need special licensing for. The next several features are also part of that pack.

With this feature, you can designate which group policy objects are "managed". For each managed GPO, you can designate who has authority to edit/check out/check in that GPO. Then that person can check out a local copy of the GPO, make edits, and re-check it in. That person can't apply that GPO to your production environment though. He needs to request that action. The authorizer then applies the GPO assuming there aren't any problems. This kind of functionality may have quite a bit of relevance with UWWI.

Offline Group Policy editing

This is really the same as the above change management feature. And there's also the GPMC feature which provides this too.

GPO difference comparison

Windiff for GPOs.

Extremely rich filtering including *everything* you're currently doing in your login scripts (plus stuff that I know you can't do there)
and
Support for pretty much everything you can't do with Group Policy today, but you wish you could
and
~3000 custom group policy settings

This looked simply awesome. Deploy a scheduled task via group policy. Map a drive letter for members of group X, only when using Terminal Services, when on a computer on this subnet. And so on. The full set of possibilities was not demo'd, but they did show us a set of roll-up objects which looked very comprehensive. James was in that session, and he probably remembers a larger set of stuff that was possible than I do. I'll ask him to weigh in with more examples if he has them.

This feature set looks absolutely killer for the UWWI delegated OU space. It has the possibility to eliminate any need for user property delegation.

Gotchas

On vista/server 2008 only processes running under an administrator context will display Computer settings in the gpresult tool, and only if that admin user has logged in locally on that computer. There is a workaround for this (aside from logging in and elevating): On all your GPOs, you need to delegate to Authenticated Users the 'Read GP resultant data' permission.
-----
If you have not set:

computer/admin templates/system/verbose vs normal status = enable

then when your vista/server 2008 computers boot and are applying group policy and other pre-ctrl-alt-delete stuff, the users simply see a "Please Wait" dialogue. If you have set this setting, then you see a single line of detail on what stuff is actually happening. This is very useful for troubleshooting (and also gives the user the sense that whatever is going on is progressing).

Engineering6/6/2007 12:23 PMNETID\sadm_barkills
  
Another post from Orlando ... :)
 
I've attended two sessions now on the new Group Policy stuff and I think this is the most exciting feature set for the "Longhorn"/Server 2008 release. I should add that all this stuff is already baked into Vista.
 
Here's a summary of features:
  • New GPMC and GPOE
  • New format for GPOs, called .ADMX
  • Isolated Group Policy service
  • 500+ new GP settings
  • Multiple local group policies
  • Network Location Awareness and no ICMP issues
  • Central store
  • MAJOR logging improvements
  • Search and filtering improvements within the GPMC

This set of improvements is awesome, and I'm particularly excited about the better logging, because nothing is worse than troubleshooting group policy today.

But that's not all. There's also the new Advanced Group Policy Management (AGPM) from the Desktop Optimization Pack. It provides these notable features:

  • (Enforced) Change management for Group Policy editing
  • Offline Group Policy editing
  • GPO difference comparison

And that's not all. Microsoft recently bought Desktop Standard's PolicyMaker product. Desktop Standard was the leading 3rd party custom group policy vendor. There are plans to integrate it into Server 2008, but Microsoft isn't saying exactly when. We did hear that it'd likely be around the release, but that's inexact.

PolicyMaker has the following notable features:

  • Extremely rich filtering including *everything* you're currently doing in your login scripts (plus stuff that I know you can't do there)
  • Support for pretty much everything you can't do with Group Policy today, but you wish you could
  • ~3000 custom group policy settings

Not a very accurate description, I know. Go to Desktop Standard for more info.

I've got to run off to another session, so I'll be back later with more detail on the summary stuff I've got above.

Engineering6/6/2007 10:42 AMNETID\sadm_barkills
  
Probably best to start this post by giving credit up front to Zephyr McLaughlin within C&C for his hard work moving quickly to help meet the Windows needs that are being uncovered as part of the Nebula domain migration into the NETID domain (aka UWWI). Thanks Zephyr!
 
So most folks don't know much detail about UW NetIDs. I'll start by giving a sketch of the landscape, then move to what's new here. I know Zephyr has plans to get full NetID documentation published, but until then, here's an informal summary from someone close to the source.
 
There are something like 300,000 UW NetIDs with active Kerberos service. But this is not all of the UW NetIDs; there are also UW NetIDs without active Kerberos service.
 
Behind the scenes there are several different types of UW NetIDs:
  • The UW NetID most people have is a personal NetID. It belongs to a person for life, and generally personal NetIDs have active Kerberos service for life.
  • Then there are shared NetIDs. These are sometimes called supplemental NetIDs, and can be issued for a variety of reasons. In the future, shared NetIDs will not be eligible for active Kerberos service, but there is a long road ahead before that becomes reality.
  • Reserved NetIDs are by definition ineligible for active Kerberos service. They exist to ensure that certain names are not granted to anyone because those names are in use in some critical technology or service.
  • Temporary NetIDs are given for a fixed period and have active Kerberos service.
While not a type, there is also a special property which can be set on any NetID to indicate that it is used for testing.
 
Most NetID types have naming rules, but I won't go into that detail here.
 
Until a couple weeks ago these were all the UW NetID types.
 
About 6-8 weeks ago we realized we had some use cases which the existing NetID types didn't quite meet.
 
As we got ready to migrate Nebula into NETID, we found there were a lot of Nebula user accounts which couldn't have active Kerberos service because they were reserved (hundreds of Nebula service accounts).
 
We also realized that the Windows best practice of separating administrative privileged accounts from person accounts that we actively use within Nebula had no complement in the UW NetID system.
 
So both admin and application NetIDs emerged as requirements--and necessary in a very short time period.
 
I'm happy to say that admin NetIDs exist today. I've had one for a little over 2 weeks now, thanks primarily to the hard work Zephyr has provided. And within Nebula, we've begun rolling these new NetIDs out to those computing support staff which already had an existing Nebula admin account. I'll go into greater detail in a minute on admin NetIDs.
 
Application NetIDs are still in process, and telling you where they are at should help you appreciate what's going on behind the scenes.
 
So for a new NetID type to exist, it really needs a unique namespace so that NetIDs of that new type aren't in conflict with personal NetIDs. Additionally, each NetID type needs to have some administrative policy carefully crafted to govern its intended use, and where possible technical solutions need to be implemented to carry out that policy. And, of course, the actual details of implementation need to be defined. The namespace decision needs some level of vetting, and the administrative policy makes a circuit of computing and security folks for approval.
 
All that has happened for admin NetIDs--which like I said, is rather remarkable given the short timespan. Application NetIDs have had a namespace, implementation details, and administrative policy drafted but these are still being vetted for approval.
 
Now ... back to details on admin NetIDs. They exist to segregate the risk from a potential account compromise to a limited pool. There are 3 types of admin NetIDs:
  • Server admin NetIDs are for users with administrator/root level rights or their equivalent on servers. You must have a personal NetID to get one, and the name for a server admin account is: sadm_<personal netid>. So for example, sadm_barkills.
  • Workstation admin NetIDs are for users with administrator/root level rights or their equivalent on a large set of workstations (say > 50). You must have a personal NetID to get one, and the name for a workstation admin account is: wadm_<personal netid>. So for example, wadm_barkills.
  • Enterprise admin NetIDs are for users who manage a large set of user accounts (> 1000 users)--for example, domain admins for large Windows domains. You must have a personal NetID to get one, and the name for an enterprise admin account is: eadm_<personal netid>. So for example, eadm_barkills.

This set helps to limit the impact of a compromise of these special user accounts.

As I said earlier, there is an administrative policy for admin NetIDs. Without going into the full details, the high points include that the passwords for these accounts must be changed at least every 120 days, must be at least 14 characters, and can't be the same password as *ANY* other user account you have (*ANYWHERE*). You shouldn't do stupid things with these accounts, like saving the password in an insecure location. And generally they should only be used for interactive administration. You wouldn't run scheduled tasks with these accounts (caching the password in a place where many folks might get it), nor run SQL jobs with them, nor surf the internet with them. These NetIDs should not be used to log in to computers that are outside your immediate control (you never know where a keystroke logger has been installed), but when that becomes necessary you simply reset the password immediately thereafter.

Currently, admin NetIDs are only being issued within C&C because in our haste we haven't fully fleshed out the identity proofing required to get one. However, if you foresee a migration into NETID in your future, and some set of your domain user accounts are currently admin users, you can certainly go ahead and rename them (and change your practices to adopt this naming) now to save yourself the headache later.

And ... that's about all I have on this topic for now. I'll have more in the weeks to come.

UW Infrastructure6/3/2007 6:08 PMNETID\sadm_barkills
  
I'm here in Orlando in the middle of a Sharepoint session break. I'm very excited about the possibilities for Sharepoint here at the UW, and think that of the four early projects we are doing in the roadmap set, it is the one which has the most potential to improve computing at the UW.
 
Let's take just one of the sharepoint features as an example: search.
 
Being able to selectively index specific websites, fileshares, and other repositories along with the sharepoint site (and exclude all the content which has no relevance) has a lot of value. It's the difference between what you get out of a google search (you know, the half million hits you get) and what you want to get out of a search--just the relevant documents.
 
This morning I attended a workflow session. I think this is another of the feature sets in Sharepoint which has an amazing potential to improve processes at the UW. Wouldn't it be great if I could go to a single online workflow to authorize my conference travel? That form could then alert the appropriate financial and managerial staff when their approvals or input were necessary and move the process along without a bunch of administrative overhead.
 
Before my session starts again, I want to quickly mention another exciting feature I read about this morning. Apparently Windows server 2008 (longhorn) has revised the password policy restrictions. With prior AD versions, you could only have a single password policy for an entire domain. With 2008, you have the option of multiple password policies. What's better is that these new policies can be selectively applied based on more complex logic, e.g. members of this Active Directory group get this password policy, but everyone else gets the default password policy.
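To make that a little more concrete, here's a rough sketch (not something we've deployed; the DNs and names below are placeholders) of how one of those fine-grained password settings objects (PSOs) gets applied in a 2008 domain -- you add the group's DN to the PSO's msDS-PSOAppliesTo attribute and everyone in that group picks up the policy:

using System.DirectoryServices;

class ApplyPso
{
    static void Main()
    {
        // Placeholder DN: an existing PSO in the Password Settings Container.
        DirectoryEntry pso = new DirectoryEntry(
            "LDAP://CN=AdminAccountsPSO,CN=Password Settings Container,CN=System,DC=example,DC=edu");

        // The policy applies to whatever users and groups are listed here.
        pso.Properties["msDS-PSOAppliesTo"].Add(
            "CN=Admin Accounts,OU=Groups,DC=example,DC=edu");
        pso.CommitChanges();
    }
}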
 
I'm sure there will be more this week ... I owe y'all a post about admin UW NetIDs, a new NetID type that we started rolling out last week based on the Nebula domain migration project. So look for that sometime soon.
Engineering6/3/2007 11:29 AMNETID\sadm_barkills
  
As many of you know, we're rapidly approaching the time to renew our Microsoft Campus Agreement.  With the release of Exchange Server 2007, Office 2007, Office SharePoint Server 2007, and the forthcoming release of Office Communications Server 2007 and the System Center family of products, Microsoft has added a new twist to the licensing story: Enter the Enterprise CAL Suite.
 
Traditionally, Microsoft has offered the "Campus Desktop", which includes Windows upgrades, Office Enterprise, and the Core CAL Suite.  The Core CAL suite has traditionally given you licensing for Windows Server, Exchange, and SharePoint.
 
So, what's the difference between the Enterprise CAL suite and the Core CAL suite, and why would I be interested in it?
 
Let's start by comparing the two:
Core CAL Suite:
  • Windows Server CAL
  • Exchange Server 2007 Standard CAL
  • Office SharePoint Server 2007 Standard CAL
  • System Center Configuration Manager (SMSv4) Configuration Management License (CML)
 
Enterprise CAL Suite:
  • Windows Server CAL
  • Exchange Server 2007 Standard and Enterprise CAL
  • Office SharePoint Server 2007 Standard and Enterprise CAL
  • System Center Configuration Manager (SMSv4) Configuration Management License (CML)
  • Office Communications Server 2007 Standard and Enterprise CAL
  • Windows Rights Management Services CAL
  • System Center Operations Manager Client Operations Management License (OML)
  • Microsoft Forefront Security Suite
 
"Yeah, so, the Core CAL suite gives me everything I care about?  Why should I shell out the extra money for the Enterprise CAL suite?"
 
Let's look at Exchange Server 2007, for example:  As we continue down the path of deploying Exchange Server 2007 and its feature set, departments wanting to take advantage of the advanced features (like Unified Messaging) when we release 'em will need to ensure that they hold the Exchange Enterprise CAL in addition to the Standard CAL.  The Enterprise CAL suite gives you the full feature set for each of the server products listed above, in addition to the Forefront Security Suite when it becomes available later this year.
 
"How much extra per user is the Enterprise CAL suite versus the Core CAL suite?"
 
While I don't know exact pricing, and you shouldn't quote me on anything related to pricing, I heard today that the Enterprise CAL suite is ~$1/FTE more than the current Campus Desktop with Core CAL offering is today.  Considering the standalone costs of the Enterprise CALs plus the Forefront suite, it's well worth the investment to unlock the additional features in these products.
 
If you are the licensing administrator for your department, I'd encourage you to consider bumping up to the Enterprise CAL suite on your renewal this year.  As your department looks at taking advantage of the central Microsoft services C&C is rolling out, it'll save you truck loads[1] on licensing.
 
For more information on the Microsoft CAL suites, check out http://www.microsoft.com/calsuites/default.mspx.
 
To renew your department's participation in the UW's Campus Agreement, get a hold of Dave McCone at mccone at u dot washington dot edu.  Do it fast -- the prices are going up after June 15th!
 
For information on what the Exchange Enterprise CAL license gets you, check out http://go.cac.washington.edu/go/?LinkID=19
 
Enjoy your weekend!
 
DZ
 
[1] Unfortunately, the size of the truck was undefined, so depending on your organization and your organization's participation in C&C's Microsoft services, you may save anywhere from a small Tonka truck's worth of cash to upwards of that semi that nearly ran you off the road last weekend.
Exchange5/18/2007 8:24 PMD. Zazzo
  
Good afternoon!  I'm starting to work on some initial designs around e-mail address spaces and how we can manage them in our centrally managed Exchange environment (codename "Everest"). 
 
Specifically, the scenario I'm looking at right now is the case where an individual user has more than one department relationship -- as an example, consider a user Joe Blow.  Joe Blow is an associate professor for the College of Pottery, but he's also the Vice President of the Department of Redundancy Department. 
 
In today's world, he has two separate mailboxes - one that's joeblow@cop.washington.edu, and another at joeblow@washington.edu.washington.edu, respectively.
 
My question to the masses is two-fold:  Do you have users like this today, and if so, how do they handle (or how do you recommend they handle) this scenario?  Forward one to the other?  Configure both in their e-mail client?  Ignore both?
 
In a central messaging system, would you expect that Joe would be able to keep both of his departmental e-mail addresses?  Which one would be his "primary" e-mail address, or should his @u address be his primary?
 
Food for your Thursday afternoon thoughts.  Let me know.  Post your comments here, or send them to me at dzazzo at cac dot washington dot edu.
Exchange5/17/2007 1:16 PMD. Zazzo
  
We've been a little quiet here lately -- we'll try to get better at that.  I did want to sneak in a Friday afternoon post mentioning that I've uploaded two Exchange-related pictures today:
 
The first is a high-level diagram showing a version of the inbound message flow coming in from the outside world, and also shows the relationship between clients, the Client Access and BES servers, and the mailbox servers.  There's a bit of detail missing, but it should give you an idea of where we're going with this and what the new world looks like in Exchange Server 2007 (for those used to Exchange 2003's topology.)
 
The second image is a mockup image and is related to our work and discussions about mailbox provisioning and management - something I'll write more about here soon.  We're looking to the central subscription service (that I wrote about here) for handling provisioning and de-provisioning.  This gives the end user a nice one-stop shopping experience for managing their Exchange mailbox services as well as the rest of their central C&C services (like homer, dante, web publishing, etc.) The image is an example of what a "Manage My Exchange" page might look like as it's tied into the rest of the manage pages (at https://uwnetid.washington.edu/manage.)
It's about time for lunch... enjoy your Friday afternoon and your weekend!
 
- dzazzo
 
Exchange5/11/2007 1:00 PMD. Zazzo
  
A short post this time, just to clarify some expectations around the Windows user delegation space. I probably should have noted these previously, but didn't.
 
Institutional Data
 
So as David just noted in his Centrally Provisioned User Attributes post, several attributes are automatically provisioned from institutional data. The existing provisioning process is not sufficient today to call UWWI anything more than loosely synchronized with the whitepages data; however, we're working on addressing that. We also have to keep in mind that some number of folks opt out of the whitepages, meaning that we all must respect the user's decision.
 
There is an issue with some of the institutional data where the quality of the data is poor. I mentioned this point in user delegation, chapter two in the 'some ramblings' section about the departmental data. That data is more or less just a string which doesn't have to correspond to anything based in reality. In fact, you can go edit your department string right now via ESS. Make yourself part of the College of Pottery.
 
In addition to the attributes that David noted, there are some other attributes for which there is no existing institutional data, but there may be some data in the future.
 
Personal Data
 
Then there is some amount of data which is by nature personal, and really should only be populated by the individual. For example, the password. Or the ability to designate who can send/receive email/calendar invites via Exchange on your behalf.
 
Microsoft asserts there is a large set of attributes which by default each user can set themselves. This large set is broken down into 3 property sets, known as personal information, phone and mail options, and web information.
 
The Personal Information attribute set is:
aCSPolicyName
assistant
c
facsimileTelephoneNumber
homePhone
homePostalAddress
info
internationalISDNNumber
ipPhone
l
mobile
mSMQDigests
mSMQSignCertificates
otherFacsimileTelephoneNumber
otherHomePhone
otherIpPhone
otherMobile
otherPager
otherTelephone
pager
personalTitle
physicalDeliveryOfficeName
postalAddress
postalCode
postOfficeBox
preferredDeliveryMethod
primaryInternationalISDNNumber
primaryTelexNumber
registeredAddress
st
street
streetAddress
telephoneNumber
teletexTerminalIdentifier
telexNumber
thumbnailPhoto
userCert
userCertificate
userSharedFolder
userSharedFolderOther
userSMIMECertificate
x121Address
 
The Web Information attribute set is:
url
wWWHomePage
The Phone and Mail Options attribute set is:
None :)
 
Uh ... what about the overlap?
 
Astute folks will notice that there is an overlap between these two worlds of data. Specifically, we are setting institutional data on the telephoneNumber, streetAddress, and facsimileTelephoneNumber attributes, but any individual user can overwrite their own values directly in AD today.
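To illustrate, here's a minimal sketch of that self-service write (the DN and credentials are placeholders, not anything real): by default, a user can bind as themselves and replace a value we populated from institutional data.

using System.DirectoryServices;

class SelfServiceWrite
{
    static void Main()
    {
        // Bind as the user themselves (placeholder DN and credentials).
        DirectoryEntry me = new DirectoryEntry(
            "LDAP://CN=joeblow,OU=Users,DC=netid,DC=washington,DC=edu",
            "NETID\\joeblow", "not-a-real-password");

        // Nothing stops the user from replacing the institutionally-sourced value.
        me.Properties["telephoneNumber"].Value = "206-555-0100";
        me.CommitChanges();
    }
}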
 
This is obviously a problem from a consistency point of view, and will generate support questions like 'Why isn't my address correct over there?'
 
This is part of the problem space that I mentioned we are working on. Namely, there already is this issue of stuff being loosely synchronized today in UWWI. The solution to this must also cover the overlap described above. The current solution being considered for that near-real-time person data synchronization component is Microsoft Identity Integration Server (MIIS).
 
Summary
 
So when I ask for input on user attributes which your Windows domain uses today which might enter into the problem space for "user delegation", keep in mind that this is a really small problem space by definition. It's any attribute where only (your) departmental staff are qualified to write to it. And implied within that qualification is that there might be a conflict as to what some other departmental staff think the values should be. I also tend to think it's implied that the attributes are unique to Windows, but that isn't entirely true (e.g. unixHomeDirectory).
 
It should not include attributes which might be part of a larger institutional set of data. Those might also be needed attributes in UWWI, but they are part of a slightly different problem set, and really issues with that data set should be solved at a different level than where us lowly Windows Engineers are working.
UW Infrastructure5/3/2007 9:44 AMNETID\sadm_barkills
  
As a follow up to the last post that Brian made about user attributes in UWWI and the user delegation challenges, I wanted to provide an update and try to start a list of attributes that we populate automatically as part of the user provisioning system.
 
Originally, we decided to populate the demographic fields on user accounts the same as we had been doing in the LABS domain.  The following table outlines those attributes and their predetermined values:
 
Original Attribute/Value Sets
  • givenName -- LABS: First initial; UWWI: First initial
  • sn -- LABS: Last name; UWWI: Last name
  • displayName -- LABS: combination of above; UWWI: combination of above
  • uid -- LABS: decimal representation of user's RID; UWWI: the official C&C UID number (homer/dante uid)
  • department -- LABS: Unix homedirectory path (for Catalyst Labs); UWWI: not populated
  • mail -- LABS: netid@u.washington.edu; UWWI: netid@u.washington.edu
  • other unimportant ones (eduPersonAffiliation, uwRegID) -- LABS: N/A; UWWI: populated as appropriate
 
It's not much.  In Exchangeville, this makes the global address list (the GAL) pretty useless.  (There are 37 'J. Smith' entries in the whitepages alone - which 'J. Smith' were you looking for?)
 
The global address list works the best when it is an actual directory - an address list.  Searching based on name, department - all makes it worthwhile.
 
In anticipation of Exchange, I added some logic - what I call "Advanced Demographics" - to the user provisioning process.  (Credit:  This was made possible by the fine folks in C&C's Security Middleware Team and their recent enhancements to the Person Directory Service.  Without these enhancements, this post wouldn't be on your screen right now.)
 
The UW Person Directory Service now includes the whitepages information, as well as a helpful flag that indicates whether or not the user has opted in or out of whitepages.  This enables us to safely populate Active Directory with the whitepages information without stepping on privacy and other compliance issues (like FERPA, for example).
 
So, taking that into consideration, "Advanced Demographics" now populates the following attributes with these values:
 
Advanced Demographics Attributes and Values:
  • displayName -- FirstName LastName
  • sn -- Last Name
  • givenName -- First Name
  • initials -- Middle Initial, if available
  • department -- staff: whitepages department; student: primary major (if avail)
  • title -- staff: whitepages job title; student: class rank (Senior, Junior, etc) (if avail)
  • telephoneNumber -- staff: whitepages phone number #1; student: whitepages phone number
  • streetAddress -- staff: whitepages address line #1; student: not populated
  • physicalDeliveryOfficeName -- staff: whitepages campus box; student: not populated
  • facsimileTelephoneNumber -- staff: whitepages fax number; student: not populated
  • mail -- staff: whitepages e-mail; student: netid@u.washington.edu
(in this list, 'staff' == staff/faculty)
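For the code-minded, here's roughly what that mapping looks like for the staff/faculty case -- purely illustrative; the WhitepagesRecord type and its fields are made-up stand-ins for whatever the Person Directory Service actually hands us, but the opt-out check and the attribute names are the important part:

using System.DirectoryServices;

class WhitepagesRecord
{
    public bool PublishOk;          // the whitepages opt-in/opt-out flag
    public string FirstName, LastName, MiddleInitial;
    public string Department, Title, Phone, Address, CampusBox, Fax, Email;
}

class AdvancedDemographics
{
    static void Populate(DirectoryEntry user, WhitepagesRecord wp)
    {
        if (!wp.PublishOk)
            return;                 // respect the user's whitepages decision

        user.Properties["displayName"].Value = wp.FirstName + " " + wp.LastName;
        user.Properties["givenName"].Value = wp.FirstName;
        user.Properties["sn"].Value = wp.LastName;
        if (!string.IsNullOrEmpty(wp.MiddleInitial))
            user.Properties["initials"].Value = wp.MiddleInitial;
        user.Properties["department"].Value = wp.Department;
        user.Properties["title"].Value = wp.Title;
        user.Properties["telephoneNumber"].Value = wp.Phone;
        user.Properties["streetAddress"].Value = wp.Address;
        user.Properties["physicalDeliveryOfficeName"].Value = wp.CampusBox;
        user.Properties["facsimileTelephoneNumber"].Value = wp.Fax;
        user.Properties["mail"].Value = wp.Email;
        user.CommitChanges();
    }
}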
 
These improvements will help flesh out the Exchange GAL to make it more usable to the users of the service, and at the same time, provide added value to the SharePoint project and the future Microsoft roadmap services that we intend to deploy over the next 18-24 months.
 
At the moment, existing user accounts will only get populated with advanced demographics when an event occurs on the user account -- either a password change, netid rename, or the activation/deactivation of the Catalyst Labs subscription.  We're actively investigating methods to get this information updated on a near-realtime basis from the whitepages and the other directories that we consume.  There's a variety of strategies that we're considering, and no doubt it'll prove to be food for many a blog post to come.
 
Lastly, before I hit 'Publish' ...
 
Right before I started writing this, I read through the comments on Brian's last post about the user delegation -- and as always, I do want to restate what I hope is the obvious: that your input is invaluable to us.  Helps keep us on track with what you want out of these services that we're working hard to deliver.
 
Keep the feedback comin'.  Often.
UW Infrastructure5/3/2007 12:10 AMD. Zazzo
  

Six weeks ago, there was a long post here on some of the thoughts we have around how to solve the user delegation issues in the NETID domain.

I'm back today to share our latest thoughts, and to solicit some input. As you might have guessed based on the MS Roadmap, C&C is looking at moving Nebula into the NETID domain. So we need to look at how to solve some of the user delegation issues on a quicker timeline than we've scoped on the roadmap. In some cases, we'll use temporary solutions which will be replaced by more robust solutions, and in some cases we'll be blazing the trail for the enterprise solution. So we've had to step up the intensity of our thinking on this issue. Which is why I'm here to solicit some input from y'all.

The INPUT

I'm compiling a list of all the user attributes that UW Windows administrators currently use to get an idea of the breadth of the problem space. Please send me a list of the Windows user attributes your domain currently populates in some fashion. If the use is not self-explanatory, a short description is appreciated.

Here's a list I've compiled so far, with a short description of what each is:

homeDirectory - The path to your ONE Windows home directory.
homeDrive - The drive letter to map your home directory to.
scriptPath - The path to your login script.
department - A single string representing the ONE department you belong to.
accountExpires - The date your account should stop allowing new logins.
servicePrincipalName - Kerberos principals which can be used in association with this user.
userWorkstations - The list of Windows computers this user is restricted to logging into.
msDS-AllowedToDelegateTo - The list of Windows services which can impersonate this user.
profilePath - The path to your user profile.

So again, send additions to me, or make a comment here.
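If you want a quick way to survey which of these your domain actually populates before replying, here's a rough sketch -- the domain DN is a placeholder, and note that some attributes (accountExpires, for one) are always present, so a presence filter isn't meaningful there:

using System;
using System.DirectoryServices;

class AttributeSurvey
{
    static void Main()
    {
        string[] attrs = { "homeDirectory", "homeDrive", "scriptPath", "department",
                           "accountExpires", "servicePrincipalName", "userWorkstations",
                           "msDS-AllowedToDelegateTo", "profilePath" };

        // Placeholder domain DN.
        DirectoryEntry root = new DirectoryEntry("LDAP://DC=example,DC=edu");
        foreach (string attr in attrs)
        {
            // Count user objects that have any value on this attribute.
            DirectorySearcher searcher = new DirectorySearcher(root,
                "(&(objectCategory=person)(objectClass=user)(" + attr + "=*))",
                new string[] { "cn" });
            searcher.PageSize = 1000;   // pull everything, not just the first page

            Console.WriteLine("{0}: {1} users", attr, searcher.FindAll().Count);
        }
    }
}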

Our Latest Thoughts

There are only a couple attributes where we see a potential for conflict on a wide scale. These are the homeDirectory, homeDrive, scriptPath, and profilePath attributes.

We're observing requirements that suggest that delegating the entire user object to administrators across the university might not be such a good idea. Some examples of those include the need to identify which admins are authorized for any given user, the need to only permit write access to a small subset of attributes (most Windows user attributes should be fed from institutional values, as they are not specific to Windows), a need to satisfy FERPA and privacy requirements about who can read information about users, and a need to have a process by which we can grant some level of permission for all user accounts (whether it's read or write) to "service" accounts which have an institutional business need to do so.

So to put things in the context of my past post, we're leaning towards the "b" solution. In other words, a webportal or other tool which doesn't involve actual permissions to admins across campus. The other piece is an infrastructure of functionality that C&C has had for quite a long time called subscriptions. You might not know it, but your uw netid has quite a few subscriptions associated with it. There are subscriptions you are automatically permitted to based on what kind of uw netid you have. There are subscriptions you can be permitted to based on some admin explicitly granting them to you. And each of these subscriptions can have details associated with them. You tie into this subscription system when you go to the 'manage my uw netid services' webportal. When a subscription is permitted and enabled, a subscription event is generated, and any subscription management listeners that care about that subscription pick up the event and can act on it. The UW Windows Infrastructure is already using this subscription infrastructure today--in fact, David described it in a post over on the winauth blog. So we're looking at using this toolset to help solve the user delegation issues.

Let's take the example of the homedirectory to illustrate what we're thinking here. Nebula will get a new subscription called 'Nebula homedirectory'. Nebula Support personnel would use a tool to permit Nebula users for this subscription. A background process would take that permit and automatically enable the user for that permitted subscription. A listener picks up the enable event, sets the homeDirectory and homeDrive AD values, and provisions the home directory. Now ... say another group comes along with a Windows home directory service. Let's say the college of pottery (COP). COP requests a new subscription for their Windows home directory service. Some users of COP also happen to be Nebula users. Conflict, right? So in this case, the first 'homedirectory' subscription has been permitted and enabled, but the second homedirectory subscription has only been permitted, but not enabled. So while there is a conflict of services offered to the user, there is no conflict in AD. The user can then self-select one of the many home directory subscriptions they are permitted via their 'manage my uw netid services' webportal page. Upon making a different selection, events that enable the new subscription and disable the old one would go out.
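For the curious, here's a hypothetical sketch of what the listener side of that might look like -- the event shape, method name, and file server path are all made up for illustration:

using System.DirectoryServices;

class HomeDirSubscriptionListener
{
    // Called when an enable event for the homedirectory subscription arrives
    // (the netid/userDn parameters stand in for whatever the real event carries).
    public void OnSubscriptionEnabled(string netid, string userDn)
    {
        using (DirectoryEntry user = new DirectoryEntry("LDAP://" + userDn))
        {
            // Point the AD attributes at the service's home directory location.
            user.Properties["homeDirectory"].Value = @"\\nebula-files\home\" + netid; // hypothetical server
            user.Properties["homeDrive"].Value = "H:";
            user.CommitChanges();
        }

        // Provisioning the actual share/folder and its ACLs would happen here too.
    }
}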

Some Ramblings

This solution won't work for all the attributes above, but it is a neat one for some of the most widely contentious ones. And for the others, I've got some ramblings below.

Writing to msDS-AllowedToDelegateTo is a domain admin only activity by Microsoft design. I suspect this is going to need to be a help@cac request kind of thing, where C&C does the actual deed upon request.

We are populating the department attribute from enterprise whitepages data. This data is not normalized, nor consistent, nor necessarily accurate. But it is the institutional data of record. Nebula is looking at moving its need for custom values here to a database.

accountExpires (and account disable) is a lost functionality, because of the way the uw netid kerberos service works. Nebula needs to make a lot of changes to replace this functionality with service-specific ones. I imagine if you use either disable or expire, you'll also need to look carefully at how you "deprovision" the various services you provide to the set of users you work with. Group membership deprovisioning is likely the primary solution here.

Nebula uses both the servicePrincipalName and userWorkstations attributes only on "service" accounts. Setting additional values of the servicePrincipalName can be needed to enable kerberos functionality in association with delegation/impersonation scenarios. Service accounts are usually accounts with shared passwords--several Windows admins will know that password--and often they have a wide scope of permissions. In general, Nebula limits the exposure of these accounts by only permitting them to login from a small set of computers. We're working with the uw netid team to identify whether they need a new netid type for this kind of account and how best to retain these Windows functionalities.

UW Infrastructure5/1/2007 12:53 PMBrian Arkills
  
So ... as with many of my posts, this was spawned by an email thread with someone on campus.
 
In general, I'll start out by mentioning the 'how to use uwwi' document at http://www.netid.washington.edu/documentation/howToUse.aspx.
 
Lots of provisos there. And there are a bunch of highly relevant FAQs in a 'common scenario' document at http://www.netid.washington.edu/documentation/faqCommon.aspx.
 
To summarize the question posed to me:
 
We're planning to be early adopters of some of the services C&C has plans to offer. Couldn't we replace all our existing domain user accounts with UWWI (NETID) user accounts during the summer break? We'd leave all our other stuff--groups, computers, etc. in our domain. Seems like this would reduce the impact on our users.
 
To answer that question, you have to delve into a bunch of stuff.
 
If you note the roadmap details carefully, we agree with the strategy of unifying the user accounts--that's partly why Nebula is migrating before adopting the other services in the roadmap.
 
However, providing a blueprint for migration into UWWI is not on the roadmap until later, after we've had a chance to subject ourselves to the experience via the Nebula migration (see roadmap item 3.10). There are a ton of preparatory details involved, even for a partial migration--only adopting the user accounts, and not moving groups, computers, etc. For just user account migration, some tools will be required.
 
For example, one pre-requisite is to reconcile your existing domain users with the uw netid namespace. A detail within this space is getting the netid type (yes, there are different types of uw netids) correct for each user account. For example, your admin accounts and service accounts will need special netid types. And for admin accounts, the right netid type doesn't exist yet (it's in process and not available today, even to us).
 
Many other details here--you'd want to get sidhistory set, and we aren't prepared to handle that process. And I don't think there is a good early solution to the roaming profile issue.
 
For the longer term, if you want to start preparing for a full migration, I'd suggest looking at:
-netid reconciliation for user accounts
-netid reconciliation for groups (group names use samaccountname for uniqueness, which means none of your group names can conflict with any valid uw netid)
-implementing a replacement for 'domain users' that's specific to the set of users you provide services to
-looking closely at anything you do with login scripts
 
Wait a minute, you say, we already use netids as our usernames ...
 
Nebula also matches users to netids. But we have a somewhat large number of user accounts for which there is no netid. These are not personal user accounts, but rather service accounts, admin accounts, test accounts, and the like. Without a uw netid, there is no user account in the NETID domain. So before we migrate, we have to get uw netids for all the accounts we want to migrate.
 
Wait a minute, you say, all my group names are longer than 8 characters, so there can't be a netid collision ...

On the group reconciliation, personal uw netids are limited to 8 characters. But personal uw netids are only one of many types of uw netids, and the other types are not limited to 8 characters. Nor are they all limited to alphanumeric characters. There are test netids, supplemental (i.e. sponsored or "shared") netids, application netids, reserved netids, and soon to be admin netids. We're also talking with the netid team about whether we need a new type of netid for Windows service accounts or not, or if they fit into one of the existing types. There are a lot of 'how does this fit into the existing infrastructure' kinds of conversations going on, that are not necessarily issues that are obvious at first glance. And these are exactly the kinds of things that having Nebula migrate first should expose as things that need resolution before inflicting them on others.

Anyhow, getting back to groups and netid reconciliation, imagine this scenario: nebula imports a group into UWWI. A year passes. Someone requests a uw netid with the same name, and since there is no existing netid with that name it's granted. That netid's name collides with the nebula group's name. The kiwi client which provisions netids into UWWI would throw an exception (especially because the object with a conflicting samaccountname isn't even a user object that it can "take over"), and we're all left scratching our heads. So obviously there needs to be a plan here. The current approach to get Nebula shoe-horned in rapidly is to get a reserved netid for every group. But that is probably not the most ideal resolution long term. Hopefully the group directory service (GDS) will weigh in with something that helps reduce the size of this problem before we need to migrate others.
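To give a flavor of the kind of pre-flight check involved, here's a rough sketch (the domain DN and group name are placeholders) that looks for anything already using a proposed samaccountname before creating a group:

using System;
using System.DirectoryServices;

class SamAccountNameCheck
{
    static bool NameIsTaken(string samAccountName)
    {
        // Placeholder domain DN; search the whole domain for any object type.
        DirectoryEntry root = new DirectoryEntry("LDAP://DC=netid,DC=washington,DC=edu");
        DirectorySearcher searcher = new DirectorySearcher(root,
            "(sAMAccountName=" + samAccountName + ")", new string[] { "objectClass" });
        return searcher.FindOne() != null;
    }

    static void Main()
    {
        // Hypothetical group name.
        Console.WriteLine(NameIsTaken("u_nebula_helpdesk")
            ? "Collision: reconcile or pick another name."
            : "Name is free.");
    }
}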

The future of group management and the group namespace is somewhat up in the air right now (see roadmap 3.5), and it could be that the group <-> netid reconciliation is mostly handled by the decisions within the future projects.

What's that sidhistory thing? I don't think I need that ...
 
As you know, ACLs hold SIDs, not user or group names. Most ACLs contain groups (i.e. the sid of a group) as the delegate with permissions, and simply changing the membership of your groups gets you pretty far without a need to change all the ACLs throughout your enterprise. However, there are an unknown number of ACLs throughout your Windows resources which most definitely contain users (i.e. the sid of a user) as the delegate to permissions. Exchange mailboxes and home directories definitely. Likely there are a number on individual workstations. And so on. Replacing all those is a big job--especially when you don't know where they all are. And the timing issues make it even more dicey.
 
That's where sidhistory and tools like ADMT come in. Sidhistory allows the SID from one user account to be appended to another user account. Then that other user account can access all the resources the original user account had access to, without any need to change ACLs. Sidhistory also works for groups. The beautiful thing about sidhistory is that the timing issues associated with a migration scenario become very simple to manage. Of course, you do eventually want to clean up all your ACLs to resolve to the "new" user account (and groups). And ADMT automates that entire process for you. There are other approaches and tools, but using sidhistory and ADMT are the Microsoft recommended path for accomplishing this kind of thing. So every migration will use sidhistory.
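As a small illustration, here's a sketch (placeholder DN) of how you'd confirm that sidhistory actually landed on a migrated account after an ADMT run:

using System;
using System.DirectoryServices;
using System.Security.Principal;

class SidHistoryCheck
{
    static void Main()
    {
        // Placeholder DN for a migrated user in the NETID domain.
        DirectoryEntry user = new DirectoryEntry(
            "LDAP://CN=joeblow,OU=Users,DC=netid,DC=washington,DC=edu");

        // Each sIDHistory value is the binary SID carried over from the old domain.
        foreach (object raw in user.Properties["sIDHistory"])
        {
            SecurityIdentifier sid = new SecurityIdentifier((byte[])raw, 0);
            Console.WriteLine("Old SID carried over: " + sid.Value);
        }
    }
}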
The timing of all of this may not be ideal, but we'll do our best to work with folks to provide what is possible.
UW Infrastructure4/16/2007 1:09 PMBrian Arkills
  
First, I don't think any alerts went out on my last post. I suspect because I saved the post as a draft, then published it afterward. If so, that appears to be a bug with the sharepoint alerting for the blog template. So check the blog for that post ... it's a doozy. :)
 
The long-awaited roadmap has finally been posted on a public website. You can grab a link to it from http://www.washington.edu/computing/msca/ or go directly to it: http://www.washington.edu/computing/msca/MS.Collab.Roadmap.pdf
 
I'll remind folks that this is a tool intended to scope the work at a high level and the dates noted are *not* any indication of when such services might be completed. In fact, I can say that almost none of the work shown has started or will start at the time indicated. As I noted a couple posts ago, project charters are being written now for the early projects, and that will lead to resourcing, and then actual work.
 
Anyhow, enjoy the roadmap, and please feel free to post comments here or send an email to ms-roadmap #AT# cac \dot\ ... well, you know the rest, but the web bots sucking down email addresses from this website right now might not.
Engineering3/19/2007 9:27 PMBrian Arkills
  
Earlier this month, there was an extended thread on organizational unit hierarchy and design on a mailing list I help manage. I posted to that thread some of my thoughts, which spawned an interesting thread with one of you which happened to be on that mailing list. That person, Dave Lange, has agreed to allow me to post the thread here.
 
I thought the discussion of great enough interest to share here; however, I will preface this discussion by noting that this work is likely a year or so out. That said, this is exactly the kind of discussion we need to be having now, so that we get a broad base of agreement, and have adequate time to think through all the implications and components needed.
 
Please, please feel free to jump into this thread. You should be able to post comments on this blog entry, provided you login with your UWWI credentials. For example, I'd type 'NETID\barkills' to login. We wish we could provide anonymous comment functionality, but we've already seen spammers hit us, so we can't do that anymore.
 
Now, onto the interesting topic ...
 
-----------------------------------------
<My original post>
 
Simple is best, however, I don't think that a single OU for all users is necessarily the end point that is workable for all universities. In fact, I'm a bit surprised that it works for so many, and so I'd like to understand why it does. I'm also motivated by the fact that our university is embarking on a set of large Microsoft-oriented projects where we need to find a creative solution in this area.
 
If you choose the single OU for users model, then you need to provide a mechanism for setting user properties (attributes) that are what I'd call windows-centric--that is, properties that have no practical value anywhere other than Windows. That mechanism can be:
 
a)      Central help desk.
b)      Web portal or similar tool.
c)      Loopback group policy
d)      Some new approach that I've never heard of
 
Each of these mechanisms has drawbacks. They are:
 
a) Lacks a bulk mechanism for any given school/department to apply a set of property values to a set of users.
 
b) Depends on how tool is written, and whether it targets end-users or Windows admins. If it targets end-users, then the drawbacks are the same as a) above. If it targets Windows admins, then the drawback is that the Windows admins need to learn a different tool than the industry standard one(s). Another drawback regardless of target audience is that you have taken on maintenance of a custom tool for the service lifetime.
 
c) Doesn't really provide a mechanism for setting user properties; provides a mechanism for setting user group policy settings. Lacks the ability to set a number of user properties, including user profile, home directory, exchange settings, etc.
 
Of course, your university may not have an issue with those drawbacks. Is that the case with all the universities who have chosen this model? Or is it the case that you have no other choice? Here at the U of Washington, we lack a fully formed organization chart, and the best organization data associated with people is a budget code for who pays their salary, so any sort of automated "sorting" into delegated user OUs is more or less out the window.
 
We're considering all our options, and have come up with a couple of them.

But first, I'm gonna backtrack. For any of these solutions, there's another problem that needs to be addressed. That problem is authoritatively determining who holds the "windows admin" role at your university for any given organization/department. That's a problem that needs to be solved before you can delegate to anyone outside your central IT, unless you decide self-service is adequate. So, let's assume you've solved that, and have an authoritative list of those in the "windows admin" role for departments and have a mechanism for adding/changing that list.
 
One option we've come up with is an adhoc solution. The adhoc solution is where the folks on that "windows admin" list get to assert (in a free-for-all) who the users they manage are. Those Windows admins can request we create and delegate a user OU for their department, and once created they can use some bulk tool we provide to "claim" users into their OU. The downside is that this is an iterative process which places responsibility and work on the shoulders of those departmental windows admins. And they'll need to work out any conflicts between themselves (I personally don't put much stock in the conflicts--the user themselves usually has an opinion that resolves these cases). However, we have some experience with exactly this type of solution already in our managed workstation service, where our frontline support folks have to claim users before being able to manage them.
 
Another option we've come up with is the single OU (option b above), in a self-service end-user flavor. In this option we address drawback a) by providing departmental "windows admins" a mechanism for defining a template of user property values. The end user then can opt into a departmental template via that webportal. This option seems particularly appealing to the student case, and might be used in combination with the adhoc solution above.
 
I'm very interested in other ideas. I'm also very interested in hearing how those schools who have a single user OU dodge the drawbacks I've noted above (although I have a sinking suspicion that the answers won't be applicable to our situation or the answer will be "we tell departments 'tough luck'").
 
-----------------------------
<Dave Lange's reply>
 
Just to give you my opinion on what you posted for a potential UW enterprise structure. I think it is more appropriate to send this directly to you and not clog the mailing list.
 
I want to be able to request an OU, part of the request should be naming my associated Comp Director and who my partners in crime/subnet contacts/fellow admins will be. I want the ability to create temp user accounts in this space, but permanent accounts should come from a common UWNetID pool. I don't want to lose one of my clients just because another OU admin has already branded them.
 
It should be the user responsibility to pick the OU they wish to associate with and if y'all will host a template for us that a user can pick and allow them to set their windows attributes from the current myuw pages. We work closely enough with the users this shouldn't be a problem for us during their introduction. With some staff and faculty belonging to 3 and 4 OUs it appears harder to document who is their lead OU over time.
 
Leaving users in a common UWNetID container also allows C&C to manage them for other central Windows services (hopefully VPN is on the sooner as opposed to the sunset cleanup).
 
--------------------------------
<My response>
 
Thanks for the feedback. :)
 
So to summarize, you'd be happiest with a user-only-directed process, where each user opts in to a department/org's management for their UWWI user account, and departments/orgs (and their support staff) have no ability to opt users into their management? In other words, no bulk selection tool.
 
I agree that ultimately the user should decide who manages them, but I'm not sure if a self-directed process will fly on a large scale, and think that we will need some capacity to allow bulk selection of user management. But time will see ... :)
 
Again, I really appreciate the feedback, and I believe there will be more opportunity for input and feedback as the roadmap moves from vision to projects to reality.
 
----------------------------
<Dave's response>
 
Yep, a little concerned what happens if the user gets the wrong template (is there a security issue there?). Maybe the OU Admin can opt in for an email (and which email address) when a user opts in to our OU or leaves our OU?
 
Bulk Selection Tool sounds cool, just not sure what each of our users attributes match before the opt-in. The UWare folks are working with the Middleware group about using the Payroll Coordinator, but even that is a little big for us.
 
As you say, there should be plenty of chances for inputing more feedback in the future. Just let me know when I have sent you one more email than you want from me!
 
----------------------
<My response>
> Yep, a little concerned what happens if the user gets the wrong template (is there a security issue there?). Maybe the OU Admin can
> opt in for an email (and which email address) when a user opts in to our OU or leaves our OU?
 
Hm. You've got an interesting point here. I can see where it would be very useful to know when a user enters or leaves your management. Maybe that mechanism can be built into the process. Within Nebula, when a new user shows up, it must be "claimed" by one of the management groups. And for that user to switch management groups, the management group must put the user back in the unclaimed container. So with that paradigm, there is always an awareness. But if the user is involved in the "claiming", it opens the possibility to where the management group might be unaware of what is happening.
 
In terms of security issues, there may indeed be some security issues, although I would think those issues wouldn't be of special concern if we decide the user should have the only say on the choice. The type of security issues I would imagine aren't because of info on the user object, or some ability to set a subset of user attributes, but rather more about what group policy settings another management group might (or might not) apply to a user, and how that might expose the user to a greater risk.
 
And that leads me to explain more details ...
 
In terms of what the delegation of the user would consist of, I do not imagine a delegation of "full control" scenario. There are too many issues with such a scenario--logistics, process, and security related issues. We don't want to give folks the ability to override institutional data at a layer lower than the place where that data is kept, so we can't grant access to any of the attributes we are setting programmatically. There are also many user attributes which are simply not writable (operational attributes) or which have possible security implications (e.g. userAccountControl, userCert, userCertificate, etc.). There is a largish set of attributes which are in a grey area in my mind. These include exchange-related attributes, attributes which belong to the "Personal-Information" property set, and the attributes in the "Public-Information" property set which aren't already covered above. Then there are the attributes more commonly used by Windows admins, which I clearly see us delegating. These include the attributes in the "User-Logon" property set (e.g. homeDirectory, homeDrive, logonWorkstation, profilePath, scriptPath) as well as a few random attributes such as unixHomeDirectory. There would also need to be the ability to delegate custom attributes (e.g. I foresee Nebula users needing several custom attributes to store existing info we currently put on attributes which really should be populated with institutional data--even if we don't currently have that data available today, department being one of those Nebula attributes). Finally, there would have to be some ability to move the user objects within your OU hierarchy. And so if we went a mostly user-self-selected "template" route, each management group's template would only include this last set of attribute values.
 
These details are not immutable, just our best working guess at this time. I'd be very interested in feedback, and if there are attributes you normally set which wouldn't fit into this kind of delegation scenario.
 
If you need some reference material on what attributes are on the user object, or what the property sets are, refer to
https://viewpoint.cac.washington.edu/blogs/winauth/Technical%20Docs/userClass.htm.
> Bulk Selection Tool sounds cool, just not sure what each of our users attributes match before the opt-in. The UWare folks are working
> with the Middleware group about using the Payroll Coordinator, but even that is a little big for us.
 
Yeah, that's a very good point. I think it'd be crazy for us to try to customize a bulk selection tool to be able to support each organization's differing definitions of who is in their org. It'd be a never ending development process. We might be able to support a single function (e.g. you pick a budget code and we give you a list), but I'm mainly thinking that the org would paste in a bulk list they have compiled using some other method outside of the tool.
 
We've been thinking that middleware's astra role definitions may have some value to resolving all these issues, but they would only provide a small piece of the overall solution. I've been thinking that the set of campus "windows administrators" would somehow be defined in astra by those folks defined in astra to hold some appropriate role within their department.
 
> As you say, there should be plenty of chances for inputing more feedback in the future. Just let me know when I have sent you one
> more email than you want from me!
 
<grin> The fact that you are aware of the fact that you might be imposing is a solid indicator that you won't be; most folks who do impose aren't aware of their impact. Anyhow, I'm very interested in hearing feedback and getting input. We have just been shackled by the lack of resource commitments, and so on. It looks like that hurdle is almost behind us, so I'm hoping to heat up the discussions soon. In fact, I might post some part of this email conversation to the mscollab blog, if you don't mind.
 
-----------------------
<Dave's response>
 
We haven't defined any extra attributes here, honestly reasonably new to AD. I'm used to seeing Central Computing groups populating Directories with Peoplesoft and Oracle like data warehouses so I can understand not all attributes will be accessible. Nebula is hopefully an extreme example of what a departmental OU would have to control although delegated Exchange from a common user pool is another example.
 
It seems like there will have to be a tool that translates from user input (web provisioning?) into protected attributes (the fuzzy grey area) done with permissions that the user can't directly use.
 
Feel free to blog, your choice if my name or initials show up or if its more anonymous.
---------------------------
 
Got an idea that would help solve the issues in this space? Have some feedback on the ideas presented?
 
Please, please feel free to jump into this thread.
UW Infrastructure3/13/2007 11:31 AMBrian Arkills
  
So the UTAC meeting this week at which Scott Mah was to present the roadmap is rumored to be cancelled. However, C&C's executive council did approve the framework a week or so ago, so we have some progress in terms of the document's lifecycle.
 
Scott Mah is pursuing alternate ways to get the roadmap into UTAC members' hands in the next couple of weeks.
 
Sometime after that, we should have a copy or link to the roadmap here for your perusal, comments, and feedback.
 
The C&C team is busy writing the various project charters (i.e. proposals) so that resources can be identified, and we can get to starting the work.
Engineering3/13/2007 8:56 AMBrian Arkills
  
A little while back, C&C launched an initiative to jointly develop a roadmap for services based on Microsoft products in collaboration with our campus partners, which hopefully can then be used to launch projects to implement each of the components in that roadmap.
 
The roadmap is intended to sketch out the timing of the logical progression of work required, and when service offerings might become available. It's a high-level overview, lacking functional specifications or detailed project charters. The roadmap is intended to be used as a starting point for discussions with campus partners. Keep in mind that the roadmap isn't a promise to deliver specific services in the indicated timeframe, but rather a tool to focus discussions on campus needs around specific products and timing, and the related resourcing requirements associated with each.
 
The Exchange portion of that roadmap has moved to the place in its lifecycle where it is now being circulated. The IT Resource Sharing group has seen it, and I believe the Computing Directors group will see it this week.
 
As of today, tracks in the roadmap for campus consumption include:
  • Exchange
  • Sharepoint
  • Live Communications Server
  • Group Management and Delegated Authority
One point of note is that this roadmap is scoped to Staff and Faculty.
 
As I noted earlier, resourcing needs are still being examined, but some of the foundational work needed (e.g. moving Nebula into NETID) has been funded by C&C. Work beyond that will need additional resource expenditures.
 
If you have any questions or comments at any point during the overall process, please feel free to send them to ms-roadmap@cac.washington.edu or post a comment here.
Engineering2/20/2007 3:15 PMNETID\sadm_barkills
  
Over at https://viewpoint.cac.washington.edu/blogs/ms-collab/default.aspx is a new engineering-focused blog for the Microsoft Roadmap initiative that you may have heard about.
Engineering2/20/2007 11:51 AMNETID\sadm_barkills
  
Just a note to tell folks that C&C is actively working on creating a roadmap for several Microsoft product offerings. More info when there's something more coherent to say.
Engineering2/15/2007 12:40 PMNETID\sadm_barkills
  
Just a note to tell folks that C&C is actively working on roadmaps for several Microsoft product offerings. More info when there's something more coherent to say.
Engineering2/15/2007 11:50 AMNETID\sadm_barkills
  
If you've been watching the blog still over the past day or so, you may have noticed that this site was replaced by a placeholder indicating that it was unavailable.
 
We ran into a roadblock when we attempted to upgrade the blog to the released version of Windows SharePoint Services - apparently, we thought we were running Beta 2 Technical Refresh (we weren't), and when we went to complete the upgrade, SharePoint politely told us that our database wasn't restorable because it wasn't the right version.
 
Uhh .......
 
So, thanks to our Microsoft contact, we were able to rip out the RTM bits, reinstall Beta 2, upgrade to Beta 2 TR, then upgrade to RTM.  It's been a fun ride.
 
All of that to say, we're back online again.  It's nice to see daylight once more.
Sharepoint1/3/2007 2:49 PMNETID\sadm_barkills
  
Well, I guess our blog has gotten popular enough to have been hit by the good ol' comment spammers wanting to sell us something to enhance our anatomy. 
 
I miss the days where the Internet was a brand new frontier ... it was fun and exciting - this brave world that only the folks interested in the bleeding edge were part of.
 
These days, I'm sure my cats could get their own ISP accounts, web pages, blogs, and podcast, and they barely know how to do much more than eat and sleep and hack up the occasional hairball.
 
Ahh, but I digress.  I've gone ahead and turned off anonymous commenting, meaning that you'll need to log in with your UWWI account (hey - unintended plug for the service!) in order to post a comment here.  If you're not affiliated with the University, then my apologies in advance for not allowing you to post your well-intentioned comment.
 
Now, back to your regularly scheduled programming.
Engineering11/7/2006 12:00 PMNETID\sadm_barkills
  
.. the moment you've all been waiting for .. the UW Windows Infrastructure Service is now open for business!
 
Here's the official announcement:
----------------------------

I'm happy to be able to announce that the "UW Windows Infrastructure" (which you may also know as "Win-auth") service is now ready for use.

The UW Windows Infrastructure (UWWI) provides automatically-provisioned Windows user accounts (hereafter referred to as UWWI user accounts) that correspond to UW NetIDs. After successfully obtaining a trust and configuring your Windows domain-based resources with the appropriate access controls, you will be able to tell clients to login with UWWI user accounts to obtain access to those resources.

For more information please see:

<http://www.washington.edu/computing/support/windows/UWdomains/uwwi>

Note especially the "How to Use" section from the left column of that page.

UW Infrastructure9/18/2006 10:30 AMNETID\sadm_barkills
  
I've added two more links to the blog links section:
 
UWWI Group Policy
UWWI Schema
 
There isn't a lot of new information in those documents--it's mostly wrapping up technical documents we posted here on the blog in the past.
 
However, there is a bit of new info there. The group policy objects are up to date. The schema doc talks about our approach to schema changes, and outlines some schema best practices.
 
The schema doc is probably not especially relevant to many folks. The group policy doc is more relevant, but mostly just to understand the security settings we've employed.
 
In any event, we're winding down on the documentation writing front. I'm thinking that many folks would enjoy a picture of the directory structure, so I'll probably publish something like that when I figure out what the right amount of detail to show is (and if I can convince my coworker David, who has more skill in the graphical arts than I, to make a fine looking picture).
 
If you've got some feedback on the various documents we've written, have a suggestion for some new documents, have a question you think should be in the FAQ, or generally see some weakness in the documentation we've got, please let us know.
UW Infrastructure9/1/2006 8:45 AMNETID\sadm_barkills
  
I've just pushed a new document up to the documentation site.  This document, "Using LDAP to Enumerate Large Groups in UWWI" discusses a scenario that Brian and I have run into while writing our account management code.
 
In short, many of the groups that we're dealing with are large.  Very large.  > 100,000 members. 
 
Windows, and specifically Active Directory, limits how much it returns at a time - both for entries in a search and for values in a multi-valued attribute, where the default range size is 1500.  What this means is that when you read a large attribute like member, if the total number of values is > 1500, Active Directory will return you 1500 at a time.  It's up to the developer to request more sets of 1500 to 'complete' the read.
 
In the .NET Framework, this limitation / condition isn't obvious.  There's a particular call that we rely on - DirectoryEntry.Properties[ "member" ].Contains - to determine whether or not a given user is a member of a group.  What we discovered is that the .Contains call only deals with the current set of values returned - so, for any of the affiliation-based groups, the working set of .Properties[ "member" ] is 1500.
 
So, as I'm sure you can imagine, a problem arises when the value you're checking for is the 1,501st value -- it's not in the set of 1500, so the .Contains call returns false - "no, it's not in the current set of values for that attribute" - of course, it is, but we don't see it.
 
Based on the false "false" answer, our code tries to add that user to the group.  When we go and save the change, Active Directory knows better, and causes us to throw an exception and bail out - of course, you can't add a user to a group they're already a member of!
 
So, we (well, actually, Brian) looked and found some sample code that dealt with this case.  In Active Directory (and probably LDAP-at-large) there's a concept known as ranged attribute retrieval - a way for us to tell Active Directory which set of 1500 we want to get.  Ahh!  Now we can check the first 1500 for the user, and if we don't find them, get the next 1500, and so on, until we either find the user or run out of values, so we know it's safe to add them to the group.
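The gist of the approach looks something like this -- a rough sketch, not the exact sample from our documentation (comparing DNs by string equality is a simplification):

using System;
using System.DirectoryServices;

class RangedMemberCheck
{
    // Returns true if memberDn appears in the group's member attribute,
    // walking the ranged results 1500 values at a time.
    static bool GroupContainsMember(string groupAdsPath, string memberDn)
    {
        const int rangeSize = 1500;   // AD's default range size for attribute values
        int low = 0;
        DirectoryEntry group = new DirectoryEntry(groupAdsPath);

        while (true)
        {
            // Ask for just this slice, e.g. member;range=0-1499, then 1500-2999, ...
            string rangedAttr = string.Format("member;range={0}-{1}", low, low + rangeSize - 1);
            DirectorySearcher searcher = new DirectorySearcher(group, "(objectClass=*)",
                new string[] { rangedAttr }, SearchScope.Base);

            SearchResult result = searcher.FindOne();
            if (result == null)
                return false;

            // On the final slice AD returns the attribute named member;range=N-*
            string returnedAttr = null;
            foreach (string prop in result.Properties.PropertyNames)
                if (prop.StartsWith("member;range=", StringComparison.OrdinalIgnoreCase))
                    returnedAttr = prop;

            if (returnedAttr == null)
                return false;   // no member values at all

            foreach (object value in result.Properties[returnedAttr])
                if (string.Equals(value.ToString(), memberDn, StringComparison.OrdinalIgnoreCase))
                    return true;

            if (returnedAttr.EndsWith("*"))
                return false;   // that was the last slice and we never found the user

            low += rangeSize;
        }
    }
}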
 
Thinking this would be useful to more than just us, we've rolled it into our documentation.  I've provided a code sample in C#, and Brian provided a sample in Visual Basic .NET.  We use a couple of variations on these samples in our stuff, but this is a good general case.
 
UW Infrastructure8/31/2006 11:43 AMNETID\sadm_barkills