Even though this KB article isn’t very reader-friendly (it contains 28 different items!), it contains good information. So whenever you bump into an OM12 Agent that doesn’t work as expected, this KB is the place to start your investigation.
Friday, April 26, 2013
Tuesday, April 23, 2013
This tool enables organizations running SCVMM 2012 SP1 to use it even better. It integrates with SCVMM, analyzes your Hyper-V environment for issues and then provides recommendations to tune the virtualization hosts and virtual machines for better availability and performance.
Taken directly from the website:
Vital Signs for VMM looks for problems that include:
Want to know more? Go here.
However, exporting a Runbook and importing it into another Orchestrator environment requires some additional attention. Many times a third Orchestrator environment is needed in order to ‘sanitize’ the exported Runbooks before importing them into the production environment. I already blogged about it, to be found here (last bullet).
However, it involves many steps, preparation and some good discipline as well. Otherwise the Orchestrator environment used for sanitizing exported Runbooks isn’t ‘sterile’ anymore, resulting in Runbooks with unexpected behavior.
Therefore it’s good news that the Parse Orchestrator Export Tool is finally available for download! What does it do? Taken directly from the SCORCH Dev blog: ‘…is a GUI application that looks at exports from your Orchestrator environment and allows you to perform multiple actions on them before re-importing them into the same environment or another Orchestrator Environment…’
Primary functions (also taken from the same blog):
So this tool is a MUST HAVE for anyone involved with Orchestrator. Good to know it’s available for FREE.
This posting is an aggregation of these postings and other useful links I found on the internet while I was taming the SharePoint Server 2013 MP.
- Configuring the SharePoint 2010 MP
This is a Microsoft KB aimed at configuring (read: troubleshooting) the SharePoint 2010 MP. Since the SharePoint Server 2013 MP is almost a one-on-one copy of this MP, this KB contains good information for anyone running the SharePoint Server 2013 MP.
- Advanced Troubleshooting of the SharePoint 2010 MP
Even though this posting is aimed at troubleshooting the SharePoint 2010 MP AND the described solution DOESN’T WORK FOR THE SHAREPOINT SERVER 2013 MP, it’s an excellent blueprint for how to troubleshoot an unresponsive SharePoint MP.
- Troubleshooting SharePoint Server 2013 MP
Describes why the SharePoint Server 2013 MP is unresponsive and how to solve it.
- Configure SharePoint Management Pack Task fails with error
Describes the failing of the Task Configure SharePoint Management Pack and how to solve it.
- Monitoring the SharePoint databases
Describes how to enable monitoring of the SharePoint databases by the SharePoint MP and how to configure the ConnectionString per SharePoint database.
Hopefully these links will assist you in making the SharePoint Server 2013 MP function properly.
The related MP guide doesn’t say anything about it and on the internet there isn’t much to be found either. Even worse, on the internet there are some blogs which make statements which aren’t true at all, like this one (freely translated): ‘…when the SharePoint MP ‘sees’ the SQL MP is in place, it won’t monitor the related SharePoint databases because these monitors are exactly the same…’.
And this isn’t true at all! It’s crap! Excuse my French but I really don’t like postings like these, simply because they’re assumptions at best, sold as statements. Therefore I have decided to write this posting in order to tell the whole story about monitoring the SharePoint databases with the SharePoint MP. There is much to tell so let’s start.
The Basics – The Classes
First and foremost, without a Discovery there won’t be any monitoring. The Classes related to the SharePoint databases (SharePoint Database, SharePoint Configuration Database and SharePoint Content Database) are defined in the Microsoft SharePoint Foundation Library MP.
The base class for the SharePoint Database is the System Database which is defined in the System Library MP, which is a core MP of SCOM. The SharePoint Configuration Database and SharePoint Content Database classes use the SharePoint Database as the base class.
As you can see, there isn’t any dependency with the SQL MP at all!
The Basics – The Discoveries
By default, the discoveries for the SharePoint databases (SharePoint Database, SharePoint Configuration Database and SharePoint Content Database) are turned on.
Basically this means that when the SharePoint Server 2013 MP is configured properly and the SharePoint Farm is discovered, the SharePoint databases will be discovered as well.
Okay, so now we have the SharePoint farm in a monitored state with the related SharePoint databases discovered. However, these SharePoint databases do have an unmonitored status.
And yes, those databases are covered by the SQL MP itself but wouldn’t it be nice to have the SharePoint databases covered by the SharePoint MP as well? If only to make the SharePoint admins happy by delivering them a scoped View within SCOM, covering their SharePoint Server 2013 farm(s).
The SharePoint MP runs only ONE Monitor for monitoring the SharePoint databases, which is understandable: no need to create a second SQL MP within the SharePoint MP. So the SharePoint MP covers only the availability monitoring of the SharePoint databases, done by the SQL Database Connection Failed Monitor, which is turned off by default. That explains why the discovered SharePoint databases have an unmonitored status.
When taking a look at this Monitor, the Description of it explains why certain blogs on the internet share the earlier mentioned untrue statement (…when the SharePoint MP ‘sees’ the SQL MP is in place, it won’t monitor the related SharePoint databases because these monitors are exactly the same…):
Description: A critical state of this Monitor indicates that a SQL Database connection attempt failed for a specified connection string. Note: This Monitor is disabled by default, enable this Monitor if you want this Management Pack to monitor the SQL Database Connection for SharePoint 2010.
It also shows the SharePoint Server 2013 MP is almost a one-on-one copy of the SharePoint 2010 MP…
However, simply placing an override on the Monitor to start monitoring the SharePoint databases won’t fit the bill here. Yes, monitoring will take place, but within 1800 seconds (30 minutes), the interval of this Monitor, the SharePoint databases will get a critical state!
How to make the SharePoint databases healthy in SCOM
The reason the monitored SharePoint databases enter a critical state lies in the way the related Monitor functions. It uses a connection string which requires additional configuration in order to make it work.
The default connection string simply doesn’t work 9 times out of 10. (Personally I have never seen it work, but hey, there are many SCOM environments I haven’t seen nor touched, so who am I to say the connection string doesn’t work at all…)
The default ConnectionString as defined in the MP looks like this: Provider=SQLOLEDB;$Target/Property[Type="WSS!Microsoft.SharePoint.SPDatabase"]/LegacyConnectionString$. (Without the . (dot) at the end!)
In order to make it work it should be adjusted to something like this:
Provider=SQLOLEDB;Data Source=<SQL SERVER NAME>;Initial Catalog=<SHAREPOINT DATABASE NAME>;Integrated Security=SSPI;Enlist=False;Connect Timeout=15
Suppose the SQL server hosting the SharePoint databases is SQL01 and the SharePoint database is sp_content_search01. The new ConnectionString should look like this:
Provider=SQLOLEDB;Data Source=SQL01;Initial Catalog=sp_content_search01;Integrated Security=SSPI;Enlist=False;Connect Timeout=15
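Before creating the overrides it can save a lot of time to verify that the new ConnectionString actually works. A minimal PowerShell sketch, using the hypothetical server and database names from the example above (replace them with your own, and run it under the account the Monitor will use):

```powershell
# Quick sanity check of the new ConnectionString before creating the override.
# SQL01 and sp_content_search01 are the placeholder names from the example.
$connectionString = 'Provider=SQLOLEDB;Data Source=SQL01;Initial Catalog=sp_content_search01;Integrated Security=SSPI;Enlist=False;Connect Timeout=15'

$connection = New-Object System.Data.OleDb.OleDbConnection $connectionString
try {
    $connection.Open()
    Write-Host 'Connection succeeded - safe to use in the override.'
}
catch {
    Write-Host "Connection failed: $($_.Exception.Message)"
}
finally {
    $connection.Close()
}
```

When the connection fails here, it will fail for the Monitor too, so fix the ConnectionString first before touching any overrides.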
Create an override per SharePoint database, targeted at that specific SharePoint database, and save those overrides in a dedicated unsealed MP like Overrides SharePoint.
Also, test this approach on ONE SharePoint database only and, during the testing phase, lower the interval (IntervalSeconds) of this Monitor to 5 minutes for instance (since I had a dedicated test environment I lowered the interval to 1 minute, allowing me to run the tests extra fast).
This way you’ll know for sure the modified ConnectionString works for your environment and puts those SharePoint databases in a monitored AND healthy state.
When you have found the correct ConnectionString, all you need to do is adjust it per SharePoint database, and remove the override on IntervalSeconds for the first (test) database so that its Monitor runs once per 30 minutes as well.
Monday, April 22, 2013
Failed to create process due to error '0x8007010b : The directory name is invalid.', this workflow will be unloaded.
First I started to look everywhere in order to get it solved. But I turned up empty handed, because:
- The directory name wasn’t invalid;
- The file (microsoft.sharepoint.foundation.library.mp.config) was present in the same directory AND accessible;
- The file could be opened/processed by any other program, PowerShell included;
- The working directory was consistent and without any errors.
So whenever you experience the same issue and the Configure SharePoint Management Pack Task throws the error Failed to create process due to error '0x8007010b : The directory name is invalid.', this workflow will be unloaded, simply close the OpsMgr Console, restart it and run the Task again.
Chances are the Task runs just fine now. I have seen this behavior many times in different SCOM Management Groups…
Friday, April 19, 2013
Which was a bit frustrating. So time to investigate. And – when possible – to solve the issue.
When I looked in the OpsMgr eventlog of those SharePoint servers, EventID 0, Source Operations Manager was logged with this description: ‘Cannot identify which SharePoint farm this server is associated with. Check the management pack guide for troubleshooting information.’
So apparently the Discoveries were fired on the SharePoint 2013 Servers, but they didn’t get very far. No Discovery = No Monitoring. Period. Time to find out why the Discoveries failed.
Soon I found this posting by the much respected SCOM Guru Tim McFadden. Even though that posting is about the SAME issue with SharePoint 2010, it bears many resemblances. However, there is ONE crucial difference.
In the SharePoint 2010 case the culprit was Windows Management Framework 3.0, which isn’t really required for SharePoint 2010. So a simple removal, as described in the same posting by Tim, solves the issue and makes it work.
But when looking at the requirements for SharePoint Server 2013 there is the REQUIREMENT for Windows Management Framework 3.0:
So removing it won’t do. Even worse, it will most certainly break some of the functionality of SharePoint Server 2013! So that’s not the way to go!!!
Nonetheless, the behavior is exactly the same. When trying to run the SharePoint 2013 Management Shell, this error is what I got (the same error appeared for the account being used by the SharePoint 2013 MP): The local farm is not accessible. Cmdlets with FeatureDependencyId are not registered.
How it was solved
It got even sillier. When I went to speak with one of the super system administrators about this issue, he logged on to one of the SharePoint Server 2013 boxes and started the SharePoint 2013 Management Shell without any issues at all! No error message! It just started!
Even though it looked like erratic behavior it showed Windows Management Framework 3.0 wasn’t the culprit here. Otherwise this system administrator would have seen the same error. So something else was at play here.
And when found, it would solve this issue. Comparing the properties of the SharePoint 2013 Management Shell between his account and mine didn’t show any differences. So adding the switch –v2 (or –version 2 for that matter), as stated in many postings on the internet (like this one), won’t do. It will also generate an error, since SharePoint 2013 requires a higher version of PowerShell; downgrading it will simply result in another failure…
Also the path variables were checked. And they matched as well. So somehow somewhere I wasn’t allowed to load that SharePoint Server 2013 PS extension. Time to zoom in on that.
Then I found this posting by Koen Vosters, all about the issues I bumped into. As he describes in that posting: ‘…You have someone who has the rights to run powershell add your user as SP Shell Admin…’. In this case that was the system administrator who runs the SharePoint 2013 Management Shell without any issues at all.
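For reference, granting that access boils down to the Add-SPShellAdmin cmdlet, run by someone who already has SP Shell Admin rights. A hedged sketch (the account name below is a placeholder; adjust it to the account in question):

```powershell
# Run in the SharePoint 2013 Management Shell as a user who already has
# SP Shell Admin rights. DOMAIN\svc-scom-action is a placeholder account.
Add-SPShellAdmin -UserName 'DOMAIN\svc-scom-action'

# Add-SPShellAdmin grants access per database; to cover the content
# databases as well, pass them explicitly:
Get-SPContentDatabase | ForEach-Object {
    Add-SPShellAdmin -UserName 'DOMAIN\svc-scom-action' -database $_
}
```

Afterwards, start a new SharePoint 2013 Management Shell session to verify the ‘local farm is not accessible’ error is gone.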
Yeah! It works!!! The same for the SCOM account used by the SharePoint Server 2013 MP! Awesome!
Afterwards all the views in this MP got populated WITH a health status! Awesome!
Thursday, April 18, 2013
Finally with RUP#2 for OM12 SP1 this is solved:
While attending MMS 2013 I heard some whispers going around that the team at Microsoft responsible for the Widgets is pushing hard to solve other known issues as well.
I know some of these people personally and I must say they work hard and are dedicated to the job. However, like in many other companies, the agendas, budgets and resource allocations don’t always match with other priorities.
But now it looks like these are all aligned. So there is more to come! Stay tuned.
Thursday, April 11, 2013
- BOF01 - Cloud and Datacenter Management Roundtable;
- BOF10 - Operations Manager Best Practices.
For BOF10 Eric Olmstead was the speaker.
However, Eric was a bit surprised since he only proposed this session, and had no intention to become the speaker. So he was glad to hand that role over to Joseph Chan (Principal Lead Program Manager for Microsoft) who accepted me as co-presenter (thank you Chan!).
What BOF is all about
BOF sessions have an open format. The slide deck (if any) consists of three slides at most: the title slide, a slide showing some additional information and the end slide about providing the evaluations for this session.
With a BOF the audience can ask any question related to the topic of the BOF session. And anyone present in that BOF can answer it, which makes a BOF always a bit exciting since one never knows what questions are going to pop up.
So a BOF is dynamic and highly interactive. The speakers become more like presenters and have to manage the discussions, questions and answers.
Therefore a BOF represents the community in a very close manner. Anyone can chime in and share their personal experience and knowledge.
Who attended one or both BOF sessions?
For both sessions many experienced people were present like (but not limited to) Bob Cornelissen, Kevin Greene, Damian Flyn, Pete Zerger, John Joyner, Maarten Goet, Walter Eikenboom, Gordon McKenna, Arie Haan and Raymond Chou.
Microsoft was represented as well by people like (but not limited to) Daniele Muscetta, Joseph Chan, Satya Vel, Eugene Bykov and Vlad Joanovic.
On top of it all, the audience itself. All people working with one or more System Center components on a daily basis, which results in good experience and knowledge.
So a lot of brain power was present during both BOF sessions, creating the ideal environment to answer even the hardest questions.
Some of the questions asked
These are some of the questions which were asked during these BOF sessions:
- SCOM 2007 R2: Upgrade or not?
Even though Microsoft provides the mechanisms to run an in-place upgrade, experiences out of the field (both by Cameron and me) aren’t always that good. Already a couple of times an upgrade went bad, resulting in a total restore of the environment. Also, many times SCOM 2007 R2 environments carry around a lot of legacy, coming from SCOM 2007 RTM > SCOM 2007 SP1 > SCOM 2007 R2, with a lot of legacy MPs, connectors and the lot. Many times Windows Server 2003 and SQL Server 2005 are being used as well, both NOT supported by SCOM 2012. Therefore it’s better to start fresh and opt for the alongside scenario where a brand new SCOM 2012 environment is built and step by step takes over the functionality of the SCOM 2007 R2 environment.
- How about Agentless Exception Monitoring (AEM)?
Additional question: What kind of load does it create on the Management Server (MS) which becomes the AEM endpoint (collector)?
Even though Microsoft provides good guidelines, it’s hard to say per situation how many AEM clients a single MS can handle. It’s also good to know that when the AEM clients don’t run a SCOM Agent, they’ll show up as unmanaged computers in the SCOM Console. When having many unmanaged AEM clients it’s even better to run a dedicated SCOM Management Group for this purpose. Also, the MS becoming the AEM Collector must use disks allowing high IO to handle the Read/Write actions. The better the disks, the more AEM clients a single AEM Collector can handle.
- The Dell MP and UNIX/Linux servers don’t work well together. Why, and how to work around it?
The Dell MP is targeted at Windows Servers, or more specifically the Windows based Dell management client running on those servers. So the Dell MP will only discover Dell servers running a Windows Server OS with a SCOM Agent in place. UNIX/Linux servers won’t be discovered. There are some options to work around it, like using SNMP and querying the MIB table for Dell specific OIDs and monitoring those. Another approach is to replace them with Windows Server (this one was suggested by a Microsoft PM).
- Audit Collection Services (ACS) and the future
ACS got an update when SCOM 2012 became RTM, targeted at new security features present in Windows Server 2012. This question was answered by a Microsoft PM and somehow I got the feeling something will be communicated about ACS in the near future. Nonetheless, ACS is here to stay and won’t be scrapped.
- When monitoring UNIX/Linux servers, an MS can’t take the load as described by Microsoft.
Additional questions: Why? And: How come an MS monitoring UNIX/Linux systems is far busier compared to an MS monitoring Windows servers?
First and foremost, there are huge differences between a SCOM Agent for a Windows Server and a SCOM ‘Agent’ for a UNIX/Linux server. The latter is more of a web service. Also, a SCOM Agent for a Windows Server is an independent entity: it downloads the MPs, processes them, performs all the related scripts as required, collects the results and sends them back to the MS. The UNIX/Linux ‘Agent’ however is lightweight and is controlled by the MS. So the MS processes the MPs targeted at the UNIX/Linux servers and sends ‘orders’ to the UNIX/Linux ‘Agent’, which processes them and sends the results back to the MS.
The Why question is harder to answer. How are the MS servers provisioned? What kind of disks do they use? When these are plain virtual disks, there might be a performance issue. Also, where do all the MS servers and the related SQL servers reside? In the same LAN segment with very low latency? Or are they separated by a WAN connection (not advised, ever!)? Besides that, what is being monitored on those UNIX/Linux servers? Multiple instances and many UNIX/Linux daemons? Because that will increase the load on the MS involved. It’s also good to know when the behavior (overloaded MS) started to happen; perhaps an MP creates too much weight on the environment. As you can see, many things might be happening here.
- Can I use an already present SQL server for SCOM?
That depends on things like how many items the SCOM environment is going to monitor, what kind of usage of SCOM Reporting is to be expected, what kind of SQL server we are talking about, what specs it has, what kind of IO, RAM and CPU it has to offer and what the SQL collation settings are. When all these questions have been answered it can be decided what to do: whether to use the existing SQL server, to provision one or even more new ones, or to install a new dedicated SQL instance on the existing SQL server.
Some rules of thumb: when SCOM Reporting is going to be used heavily together with customized Reports, it’s better to separate the Data Warehouse from the OpsMgr database. Also, when many network devices are going to be monitored, the Temp database must get its own disk. When the SCOM MG is going to monitor many servers and network devices, it’s better to use dedicated SQL servers. The SQL collation setting is very important as well: when the existing SQL server runs another SQL collation, a new SQL instance for SCOM must be installed on that server.
Sometimes the existing SQL server is a ‘beast’: many CPUs, packed with lots of RAM and running super fast disks. When that server has enough power left to host the SCOM related databases, it’s relatively safe to use that SQL server. In all other conditions it’s better to use dedicated SQL servers.
- How do I size my SCOM environment?
For this Microsoft released the SCOM 2012 Sizing Helper, an Excel sheet with some macros running in the background. This sheet helps you to size your SCOM environment properly.
- How to go about Resource Groups?
Additional questions/remarks: Resource Groups are a new feature in SCOM 2012, taking over the RMS functionality and spreading them over all MS servers present in a MG. But sometimes Resource Groups can have unforeseen behavior or even (very seldom!) result in a broken MG. How to scale Resource Groups? How to go about it?
Microsoft is aware of it and working hard to release new updated documentation about these topics on Resource Groups. Soon this document will be made available to the public.
There were many more questions asked but these are the ones which stood out for me personally.
Both BOF sessions were great and I love the open format where everyone can ask good questions and chime in to answer them. Awesome, and thank you all for being there and making these BOF sessions a great success.
During this session Anders and Pete openly shared their own experience gained in the field, enabling the audience to avoid many of the potential pitfalls they themselves stumbled upon.
Some of the takeaways of this session:
- Start small and simple. SkyNet wasn’t built in one day.
(Teach yourself to use Orchestrator by building small and simple Runbooks and go from there);
- Use the Runbook Automation Reality Funnel in order to decide whether to automate something or not:
(Picture taken from the slide deck of this session.)
Additional explanation: Only automate items which are worth automating and worth the effort. For example, when it takes a week to build a Runbook automating a task which takes 10 minutes to perform manually and only happens twice a month, the investment will take too long to pay back;
- Automating garbage is still garbage. The only ‘advantage’ with Orchestrator is that it will repeat your mistakes endlessly and much faster;
- DDD is the starting point not the end station (Drag > Drop > Done).
Basically meaning: Get the workflow in the design area so it seems to work and go from there.
- After DDD, add controls, checks and fault tolerance;
- When a control or a check returns an error, the workflow has to clean up everything it has done so far so no garbage is left;
- Check yourself before you WRECK yourself!
(a Runbook might return a success status and yet still fail);
- Don’t put too many Activities in one RB. It’s better to divide them over multiple RBs and invoke those. It makes the Runbook easier to understand and maintain, and the component RBs can be reused in other RBs;
(Picture taken from the slide deck of this session.)
- When building a Runbook, start on the left side of the screen and work to the right. Don’t go up or down since it will make it harder to understand the workflow;
- Use colors for the links:
green for success, red for warning/critical and orange for ‘could be’ situations;
- Use a naming convention for your Runbooks. So when something goes wrong it’s relatively easy to troubleshoot by pinpointing the culprit;
- Exporting a Runbook? Use a ‘Proxy’ Orchestrator environment!
When a Runbook needs to be exported to another Orchestrator environment it needs to be cleaned, so it no longer contains references to the environment it came from. The way to go about it is to build a virtualized one-server Orchestrator environment which is totally blank. Snapshot it before importing the Runbook. When the Runbook is imported, clean it step by step. Export it so you have a clean Runbook. And then revert to the snapshot so the Orchestrator environment is totally blank again.
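Before reverting to the snapshot, a quick text scan can confirm the cleaned export no longer references the source environment. A rough first-pass sketch in PowerShell (the export file name and server names are hypothetical; exports are XML, so a plain pattern match is only an indication, not a guarantee):

```powershell
# First-pass check: does the cleaned export still mention servers from
# the source environment? File name and server names are placeholders.
$export     = 'C:\Exports\MyRunbook.ois_export'
$oldServers = @('ORCH-OLD01', 'SQL-OLD01')

foreach ($server in $oldServers) {
    $hits = Select-String -Path $export -Pattern $server -SimpleMatch
    if ($hits) {
        Write-Host "Still found '$server' on line(s): $($hits.LineNumber -join ', ')"
    }
}
```

No output means none of the listed names were found, which is a good sign the Runbook is clean enough to import elsewhere.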
So I learned a lot from this session and know for sure all my Runbooks will be different from now on. It really was a good session. One thing worries me though: Anders kept on mentioning something about hamsters but the real clue eluded me. Perhaps Anders can enlighten me.
Anyone running Orchestrator should see the video recording of this session since there is so much to learn from it. It will help you to make your Runbooks better and help to avoid many of the pitfalls (which I bumped into myself as well).
Channel 9 has all the video recordings of the MMS 2013 sessions. And the good news is, this session can already be viewed online or even downloaded.
Wednesday, April 10, 2013
So the next day I had to upgrade the Veeam MP from version 5.7 to version 6.0, even though it wasn’t even 24 hours in place.
The upgrade went smoothly, with no issues at all. Except for the license file: the license file is targeted at version 5.x and isn’t accepted by version 6.0. But after filing a case on the Veeam website it was solved pretty soon.
When you want to know the differences in the latest version of the MP, go here and you’ll get all the information you need. Much has changed!
Changes which really jump out are:
- The Veeam MP now consists of a whole set of MPs which must be imported;
- The name nWorks is dropped, replaced by the name Veeam;
- The configuration of the VIC is fully automated now, so no more need to kick off a Task manually;
- The names of the components have changed;
- The look & feel of the UI is totally revamped;
- New reports are added;
- Capacity Management is extended.
Even though many things have changed one thing remains the same: the high level of quality. Not only for the build but also for the documentation. Veeam has raised the bar even more.
When I am back in the Netherlands I will blog more about this latest version of this MP.
Want to know more about UR#5 for System Center 2012 RTM? Go here.
When running System Center 2012 Service Pack 1: UR#2 has been available since yesterday. Go here.
What it does? Taken directly from the download page: “…enables Operations Manager users to receive ongoing assessment of configuration for their windows computers. After installing the connector and selecting computers to be analyzed, Operations Managers users will be able to receive Advisor alerts in the Operations Manager console…”
This connector requires OM12 SP1 WITH UR#2 installed. UR#2 became available yesterday.
The preview of this connector can be downloaded from here.
KB2802159 tells it all.
For OM12 SP1 it contains these fixes/updates (KB2826664):
For OM12 SP1 UNIX/Linux monitoring it contains these fixes/updates (KB2828653):
RTFM is key here. So prepare yourself before rolling out this UR.
Please note that recordings are available 24 to 48 hours after the sessions took place.
Tuesday, April 9, 2013
During this session I felt a positive vibe which makes me believe I witnessed the rebirth of OM. Why, you ask? Let me clarify.
Even though dashboarding is hugely improved in OM12 compared to SCOM 2007 (R2), there are still issues. These issues are recognized by Microsoft and they’re putting effort, time, resources and budget into addressing them. The fixes will be present in the next Update Rollup!
At this moment the Azure MP lacks a lot of functionality compared to what Azure has to offer today. Microsoft knows that and will publish the updated Azure MP this week; expected time of arrival is Thursday. During the session this MP was partly demonstrated.
- Support for multiple subscriptions;
- Improved wizards;
- Discovery and monitoring of many Azure components;
- Azure storage monitoring;
- Comprehensive diagrams.
System Center 2012 Console mess
Today the System Center 2012 components use different Consoles, native UI and web based. It has become a ‘console jungle’ which makes it hard to maintain for companies that have deployed multiple System Center 2012 components. Also, the decision to introduce SilverLight™ for the web based consoles is a bad call, simply because SilverLight™ doesn’t run on many other web browsers.
In today’s world people use a myriad of devices to display data. The System Center consoles of tomorrow should be able to work on those devices, no matter what vendor, OS or browser.
Microsoft is aware of this and working hard to make it right. But this is something not to be taken lightly; it requires a huge effort and budget and resource allocation. It will take some time but I hope for the best.
Management Pack quality
Finally there is a new person at the helm to drive the quality of the MPs delivered by Microsoft. For now some MPs are really good whereas others are just plain awful and bad news for the overall SCOM experience.
Daniel Savage is that person and he is open to comments on any MP delivered by Microsoft. Back in 2008 I met Daniel for the first time (TechEd Barcelona) and came to know him as a person with a huge drive. If there is anyone capable of bringing the Microsoft MPs to a new level of overall quality, it’s him.
For what it’s worth, he has my full support and soon I’ll provide him with feedback on the pros and cons of today’s MPs delivered by Microsoft.
As you can see it was a good session with good news as well. And yes, not everything discussed is available today, but it’s in the hearts and minds of the PMs involved AND on their agenda. So this gives me a huge positive vibe!
Before I knew it, I found myself at the Veeam booth where a big crowd was gathered. Soon I saw the reason why: the book System Center 2012 – Operations Manager UNLEASHED was given away for FREE! In total 250 copies were handed out in a matter of minutes!
Now that’s a COOL and AWESOME swag! And very generous as well. Cameron Fuller was there for signing those books and before I knew it, I was asked to sign the books as well. Later on Kerrie Meyler joined us so the three of us were signing these books. It felt special to be asked by so many people to sign their book. Awesome!
I talked with many people that day and went on a ‘swag hunt’ with Kevin Greene from Ireland. A good guy he is and we had a lot of fun. One of the sessions I attended that day that really stood out was the session Hacking The Data Warehouse.
This session was presented by Veeam: Alec King and the Veeam OM Reporting Guru, Oleg Kapustin. No marketing mumbo jumbo nor sales pitch. But everything about OM Reporting and how to get the most out of the Data Warehouse.
It was a very good session with a level rating starting at 300 and ending at 500+. Even though Oleg said many times ‘…it isn’t rocket science at all…’, the audience, myself included, was deeply impressed and did its utmost to keep up with him. And yes, he most certainly is the OM Reporting Guru. A title well deserved.
Stefan Stranger was present in the audience as well. I know him personally and he’s famous for his PowerShell knowledge. Besides that he’s a good SQL guru as well, so he could have presented that session too, at the same level. It was good to be in a room with so many people with so much knowledge and experience.
During this session Alec and Oleg explained why Veeam built their own GRL, the Veeam Extended Generic Report Library: simply because the default GRL present in SCOM doesn’t deliver the quality Veeam requires for their reports. (These words are mine, not Veeam’s!)
Since Veeam is dedicated to the community they have made their GRL generally available, for FREE. The GRL they demonstrated during this session is the latest one, to be found in the Veeam MP for monitoring VMware, version 6.0.
For now the previous version is available for download; soon the newest version will replace it. And again it will be available for FREE. And yes, the GRL works in SCOM no matter what MPs you have in place, so it doesn't require the Veeam MP to function, for instance.
Back to the OM Reporting Guru
Oleg demonstrated how to build an MP containing special customized Reports on the fly by simply using three command-line tools, also homegrown by Veeam. This was amazing! The first command-line tool generated an Excel sheet. After editing some entries in this sheet, another command-line tool was run and the MP containing the required Reports was built in a matter of seconds.
And yes, these command-line tools are also available for download. Not to be found on the Veeam website but on the blog run by Oleg himself: Oleg Kapustin's Sandbox: http://ok-sandbox.com.
This was a very good session and I would love to see more like it!
Wednesday, April 3, 2013
For OM12 RTM the Excel sheet OperationsManager2012_ExtendedMonitoringForNetworkDevices.xlsx shows 834 network devices in total which are supported for extended network monitoring.
When SP1 for OM12 became generally available, it wasn't very clear whether extended network monitoring had stayed the same or whether additional devices had been added. For some weeks now a new Excel sheet has been available for OM12 SP1 extended network monitoring: SystemCenter2012_SP1_OperationsManager_ExtendedMonitoringForNetworkDevices.xlsx.
This Excel sheet shows an additional 302 supported network devices. So OM12 SP1 supports 834 + 302 = 1136 network devices for extended network monitoring.
Both Excel sheets can be downloaded from here.
And as Microsoft states on the same webpage: ‘…For a complete list of devices covered with System Center 2012 Service Pack 1 (SP1), Operations Manager, you will need to download BOTH documents…’
As Cameron states on the blog of Catapult Systems (the company he works for):
This BOF session is for System Center SMEs, the Cloud and Datacenter MVPs and the product team, for a roundtable discussion of all items in the Cloud and Datacenter management space (basically everything other than Configuration Manager). If you have a question about System Center (non-Configuration Manager), this is the BOF session for you. It's scheduled for Wednesday, April 10th, 6:00 pm – 7:00 pm Pacific time.
Further information on this is below:
If you are going to MMS and are interested in non-Configuration Manager items, check out this roundtable! If you are reading this and you are an MVP or member of the product team for System Center – please see if you can be there for what should be a very interesting discussion!
I am going to attend that session for sure and – when allowed – will assist Cameron as well. See you all there!
The total number of postings isn't high, but personally I prefer QUALITY over quantity. And the articles on this blog are really good stuff. The blog is owned by Tao Yang, a systems engineer based in Melbourne, Australia:
Want to know more? Go here.
All credits go to Tao Yang. Thanks for sharing and keep up the good work! Blogs like these really make a difference.
This poster can be downloaded from here and for the lucky ones visiting MMS 2013, a printed version can be collected as well!
As Microsoft states:
This poster for VMM in System Center 2012 SP1 can help you:
As Eric posts on the blog:
’…Last year, I was talking with the folks at Altaro about putting together a test cluster. Working together, we came up with a surprisingly inexpensive configuration and upon acquiring all the items, I documented the steps to assemble them and composed an eBook that shows you how to do the same…’
The free e-book can be downloaded from here.
I have seen this issue at multiple locations: the customer has imported the latest version of the Windows Server OS MP (aka Base OS MP), version 6.0.6989.0, and after some time the Reports Performance By System and Performance By Utilization (found under Reports > Windows Server Operating System Reports) fail to show the data related to Processor performance, like this for instance:
First of all the good news: when your Reports did show data related to the Processor before you updated the Base OS MP to version 6.0.6989.0, chances are your SCOM R2/OM12 environment isn't having issues at all. Before the Base OS MP was updated, the Performance By System Report looked like this (based on Base OS MP version 6.0.6958.0):
With the latest version of the Base OS MP, Microsoft has changed some Rules for Performance Collection, among them the Rule which collects the Total Percentage Processor Time. Kevin Holman already blogged about it: ‘…Several monitoring workflows were change from Processor, to “Processor Information” perf object. This change was made because a new perf counter/object (Processor Information) was added to the OS to support more than 64 logical processors. The old perf counter object (Processor) was limited to 64 CPU’s. As physical hardware is starting to ship 6+ core systems, with HT, and multiple sockets, this was a problem for measuring utilization for VERY large boxes…’
And he continues to explain it further: ‘…NOTE: This might BREAK your existing reports and dashboard views that are expecting “Processor” object, as we no longer collect that.…. so be prepared to make some changes there…’
And guess what? The two mentioned Reports aren't adjusted to reflect this change. So the Reports try to show performance data which is no longer collected.
As you can see, the Object has changed to Processor Information with the same counter (% Processor Time). The Reports, however, still expect data from the 'old' Performance Collection Rule, which isn't used anymore, thus resulting in partially empty Reports.
There are two solutions. One is to change the underlying code of the Reports involved and upload the modified Reports – under a different name – to the SSRS instance used by SCOM R2/OM12.
The other solution is faster and requires less 'magic': rebuilding the old Performance Collection Rule and disabling the new one. After a couple of days the earlier mentioned Reports will start showing data again. In this posting I'll give you a quick explanation of how to do this.
Creating a new Performance Collection Rule & Disabling the old one
When you have 'beasts' of servers in place with many CPUs, it's better to disable the Performance Collection Rule Processor % Processor Time 2008 using a Group which is dynamically populated with all Windows Servers and has those 'beasts' as excluded members. This way the Performance Collection Rule will still run against those 'beasts' and will be disabled for all other Windows Servers.
Also, it's a good idea to put these modifications in a dedicated MP, simply because I expect Microsoft to repair this glitch in a future version of the Base OS MP. When you put all these modifications in a dedicated MP, you only have to delete it and be done with it. Otherwise you have to undo everything step by step, which is a more time-consuming process…
- Open the SCOMR2/OM12 Console with an account which has sufficient permissions to create/modify Rules;
- Go to Authoring > Authoring > Management Pack Objects;
- Hit ‘Change Scope’ and type Windows Server 2008 Computer;
- Select the Rule Processor % Processor Time 2008 and disable it through an override (put it in a dedicated MP like Temporary MP for Processor Performance);
- Right click Type: Windows Server 2008 Computer and select Create a new Rule;
- As stated before put this Rule in a dedicated MP (example: Temporary MP for Processor Performance);
- Select the correct type of Rule to create (Collection Rules > Performance Based > Windows Performance);
- Give it a proper name, like Processor % Processor Time 2008 – TEMPORARY and a proper description:
Because of Steps 4 and 5 the Rule Category and Rule Target are set correctly. Double-check them though. > Next;
- In this screen, hit the Select button. Select a server which is running Windows Server 2008 R2 and select the correct items according to the screen dump. Hit the OK button and the fields Object (Processor), Counter (% Processor Time) and Instance (_Total) will be filled with the correct information.
Set the interval to 5 minutes > Next
- Place a checkmark for Use Optimization, select the option Absolute number and set it to 5;
> Create. The temporary new Performance Collection Rule will be created now.
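For reference, the override that Step 4 writes into the dedicated MP looks roughly like the XML fragment below; you can inspect the real thing by exporting the unsealed MP from the Console. This is a sketch only: the Rule ID, the aliases and the override ID shown here are illustrative placeholders, since the actual IDs depend on your MP versions and naming.

```xml
<!-- Illustrative fragment of the unsealed 'Temporary MP for Processor Performance'.
     The Rule and Context values are placeholders; export your own MP to see the real IDs. -->
<Monitoring>
  <Overrides>
    <RulePropertyOverride ID="DisableNewProcessorTimeRule"
                          Context="Windows!Microsoft.Windows.Server.2008.OperatingSystem"
                          Enforced="false"
                          Rule="BaseOS!Example.ProcessorPercentProcessorTime2008.Rule"
                          Property="Enabled">
      <!-- Disables the updated rule that now collects the 'Processor Information' object -->
      <Value>false</Value>
    </RulePropertyOverride>
  </Overrides>
</Monitoring>
```

Because both the override and the temporary Rule live in this one unsealed MP, deleting that MP later on undoes the whole workaround in a single action.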
Within a few days the earlier mentioned Reports will show data again for the Processor object.
Tuesday, April 2, 2013
First of all my apologies to all non-Dutch readers of this blog. A part of this posting is mainly targeted at the Dutch audience. It's all about a new User Group in the Netherlands, Windows Management User Group Nederland (WMUG).
This posting will be the first AND last posting in Dutch. All other future postings on my blog will be written in English.
WMUG was founded by Bob Cornelissen, Peter Daalmans, Kenneth van Surksum and myself. WMUG is aimed at everything involved in managing IT environments based on Microsoft technology, and will serve as a platform for everyone working with that technology. It also creates the possibility not only to attend sessions but also to present sessions on topics people excel in. So WMUG is all about the community and for the community.
Besides the Microsoft technology there is also room for other mainstream technologies like VMware and Citrix to name a few.
A new user group sees the light of day
On May 22 the first meeting of the Windows Management User Group Nederland (WMUG NL) will take place. WMUG NL will focus on everything related to managing Microsoft Windows and wants to offer a platform for everyone involved in Microsoft Windows management. Whether you want to share knowledge by attending events or give a presentation at an event, WMUG NL provides the platform.
What can you expect from WMUG NL?
WMUG NL will periodically organize meetings in the Netherlands, during which the latest news and specific knowledge about a product will be shared with the attendees. Of course the evenings are interactive, so you can share your own experience with the other attendees as well. WMUG NL will also host webcasts and a Dutch-language forum.
Which products will be discussed?
The focus of WMUG NL is not limited to Microsoft products such as the System Center 2012 suite, the Windows family and Microsoft's virtualization solutions. We explicitly want to make the connection with other products from, for example, VMware and Citrix: think of VMware ESX or Citrix XenServer as a virtualization platform compared to Hyper-V.
Who or what is the Windows Management User Group?
WMUG NL is you! It was founded by four active community members, but if you want to help make this new platform a success, feel free to contact us; you are more than welcome! Because, as said, the community is there for everyone.
Hopefully we'll see you at our first event; you can register here. Be quick, because the number of available seats is limited. More information about our first event on May 22 can be found here.
Contact us via email@example.com