Many of the issues our users run into are related to credentials and security. Each environment is a bit different, and PDQ Inventory and PDQ Deploy Pro try to be as flexible as possible in order to accommodate these differences. But sometimes issues do crop up, and it helps to understand how the applications work with credentials in order to see where a problem might originate and how to get around it.
Each application has slightly different needs, so I'll go over each of them separately and highlight the similarities as well as the differences.
I'll start with PDQ Inventory as it is a bit simpler than PDQ Deploy. As you can see in the graphic below, there are a number of components in play.
Most issues come up regarding the Service User. This user needs the Log on as a service privilege, which isn't granted to any user by default (including administrators). PDQ Inventory will attempt to grant the privilege to the user, and while this usually works, security policies in place may prevent it. Without this privilege the service won't start, failing with a logon error. Unfortunately, a logon error can have other causes as well, so it may take some trial and error to pin down the root cause.
We have also seen environments where this privilege gets revoked periodically, and it's not always easy to know when that has happened. In those cases, simply re-applying the Service User (using the Server Service panel in Preferences) will correct the error, though it may crop up again.
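If you want to check or re-grant the privilege by hand, the LSA APIs will do it. Here's a minimal sketch in Python, assuming the pywin32 package and an elevated prompt; the account name is a placeholder:

```python
import win32security

ACCOUNT = r"DOMAIN\pdqservice"  # placeholder: your service account

# Resolve the account to a SID and open the local security policy.
sid, _domain, _type = win32security.LookupAccountName(None, ACCOUNT)
policy = win32security.LsaOpenPolicy(None, win32security.POLICY_ALL_ACCESS)

# Grant "Log on as a service"; granting it again is harmless if already held.
win32security.LsaAddAccountRights(policy, sid, ("SeServiceLogonRight",))
win32security.LsaClose(policy)
```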
PDQ Deploy Pro
PDQ Deploy Pro is very similar to PDQ Inventory when it comes to credentials; the main difference is how the remote service is run, and that can be a big difference.
Unlike PDQ Inventory, PDQ Deploy Pro may require a user other than Local System for its remote service. One reason is that a deployment may require access to network shares, either through a script or when using Pull File Copy. In these situations a user with network access is needed. It is possible to grant network access to the Local System account (when using Active Directory), but it's not commonly done and it's usually easier to use a domain user account. Any issue that affects using a user for the PDQ Deploy Pro Service, such as the Log on as a service privilege, also applies when using a user for the Remote Service.
Another reason that a user account is needed may have to do with the nature of the installer itself. Some installers don't work properly using Local System and some installers don't work properly without it. One thing we've learned over the years is that installers can be very weird. It's not very often that an installer developer gives much thought to installing remotely and so strange behavior can crop up unexpectedly when running from within a service or without a user interface. Most of the time it's only possible to discover these issues with trial and error.
PDQ Deploy Pro can run installers as either Local System or as the Deployment User. As of version 1.4 this is a global setting, but the next version will allow it to be set for each deployment individually, so a mix of credential types can be used.
I won't say much about PDQ Deploy as it will soon be merged with PDQ Deploy Pro and will use an identical mechanism for credentials. There are only two differences between PDQ Deploy and PDQ Deploy Pro that need mention.
- PDQ Deploy runs everything from within the Console, so there is no service and no service user to worry about.
- When using the "Current User" authentication option for a deployment, the remote service will run as Local System; when using "Other User," the remote service will run as that user.
The bottom line is that security can be a really tricky issue in any network environment, but hopefully understanding how the pieces fit together will make it easier to find out what's going wrong and where to fix it.
As always, we're interested in real-world experiences and in how we can improve our products to better help you do your job. Please let us know if you have a security-related issue with our products and we'll do our best to iron out the problem, or at least get our products to help you solve it rather than get in the way.
Photo by Keith Williamson
I was reading with interest this overview of the "Cyberwar Panel" at RSA 2011 and ran across this sentence about regulating IT security:
"Regulate results, not technology." Schneier said. "If you regulate technology, you stifle innovation. If you regulate results, you incent innovation."
I got to thinking about what that could mean. It's pretty obvious how to regulate results when you have a specific, measurable goal such as smog reduction or crash safety standards. But how do you regulate results when the ideal result is "nothing happening"? It seems to me that there isn't a meaningful way to regulate without regulating technology (at least to some level), and the rate of change in the world of computing and networking is just too high for that to work.
Then there was talk of modelling security regulations on Sarbanes-Oxley:
Chertoff called for a regulatory framework where company executives and board members sign on the dotted line, certifying what steps they have taken to secure their network, what backup systems they have in place and what level of resiliency is built into their IT system. “People take that seriously. Is it dramatic? No, but it moves the ball down the field,” Chertoff said.
Schneier concurred, noting that holding individuals at a company accountable for certain protections has worked with environmental regulations and Sarbanes-Oxley, the post-Enron law that requires directors and executives to certify their financial results.
Sarbanes-Oxley has certainly worked if the goal was to prevent companies from going public in the US. I don't really think it's any kind of model to base regulations on.
I personally think that the main reason we haven't had a "Cyber Pearl Harbor" is, in large part, the absence of regulations. The rate of unfettered innovation has meant that the security environment is so diverse that it's not possible to launch such an attack at a single point. The Stuxnet worm is a good example. Not only is it probably the most advanced and complicated attack yet devised (possibly with the resources of a nation state behind it), but it also came nowhere near the destructive power that has regulators worried.
I'm not foolish enough to think that a "Cyber Pearl Harbor" is impossible, but I also think that if such a thing is possible, it's going to happen in a way that no forward thinking (short of divine prophecy) is going to prevent. As soon as regulations come into play, technology will begin to homogenize and a single point of failure will become more obvious.
But I'm not a security expert, just some idiot with a keyboard, so I'd love to hear your thoughts on the topic.
Photo by Northampton Museum
PC Pro has an interesting (if somewhat frightening) article on the "10 most calamitous computer cock-ups." It got me to thinking about all of the times I've screwed things up either with code I've written or by some administration task I've performed. I dare not count them for fear that the number may be bigger than I think (and I can think of a pretty big number!)
What makes these 10 stand out is not so much the mistakes themselves as the scale of their effect. They are particularly calamitous not because the scope of the error was so large, but because of the reach of the technology in question. Within their own scopes, our own mistakes are no less calamitous. Accidentally deleting a bunch of user accounts or erasing backup files won't affect quite the same number of people as blowing up a gas pipeline, but try telling that to your users when they can't do their work.
Perhaps progress isn't so much a measure of success as of the size of the potential problems. If you are at the point where a simple mistake can cost thousands of hours of productivity, then you've progressed through a series of smaller tragedies to get there. I don't think I'd want anyone in a position to cause a lot of damage unless they've learned how to deal with causing a lot of damage. And learned to be sufficiently scared of pressing the "Submit" button. Not so scared that they don't dare to push it, but scared enough to make it clear that they understand what pushing it may mean. To painfully twist a famous phrase: if you aren't terrified of making a change to production, then you don't fully understand what production is.
Photo by Robert Gaal
I've been working on some code recently that deals with encryption, and it always gives me the heebie-jeebies. It's the kind of thing where you feel really secure right up to the point that you aren't secure any more. My biggest worry is securing keys in software. In order for software to encrypt and decrypt things on its own (i.e. without human intervention such as entering a password), the keys need to be in a place the software can read them. And if the software can read them, then someone else will be able to as well. They can be obscured and hidden, but they can never be as secure as a password known only in the mind of a user.
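To make the trade-off concrete, here's a minimal sketch of the alternative: deriving the key from a password at runtime so nothing usable sits on disk except a random salt. It assumes Python's third-party cryptography package, and the parameters are illustrative:

```python
import os
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_password(password: str, salt: bytes) -> bytes:
    # The key exists only while the user's password is in hand; an attacker
    # reading the disk finds just the salt, which is useless on its own.
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt,
                     iterations=600_000)
    return kdf.derive(password.encode())

salt = os.urandom(16)  # stored alongside the ciphertext
key = key_from_password("correct horse battery staple", salt)
```

The cost, of course, is that a human now has to type something every time the software needs the key, which is exactly the convenience that storing keys in software tries to buy back.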
There are a couple of classic examples of this problem out there:
CSS, or Content Scramble System, is the encryption used on DVDs to prevent them from being copied or played in unapproved geographical regions. It wasn't the strongest encryption in the world (it was 1995, after all), but it wasn't until 1999 that someone broke it, ostensibly to create a DVD player for Linux, which didn't have an officially licensed player. Supposedly the initial project worked by disassembling a licensed software player to retrieve its keys.
AACS, or Advanced Access Content System, is the encryption used for HD-DVD and Blu-ray in the same way CSS is used for DVDs. In an attempt to improve on CSS, it adds a key revocation system, which allows content producers to create discs that won't work with known compromised keys, thereby limiting the damage of an exposed key. It does seem to have worked somewhat better than CSS, but successful attacks against the system are out there.
Trusted Computing is an attempt to block the hole up for good. It's a set of hardware solutions which allow software to store and retrieve keys securely. It's certainly better than software alone, even if it isn't immune to attack.
For me there are three lessons from all this.
First, everything is a trade-off. You can add extra locks to your front door but that just makes it less convenient when you come and go. Finding the right balance is critical.
Second, diminishing returns are really steep when there is a weak link in the chain. Once you know where your weakest link is, it's much easier to determine when the diminishing returns curve makes any more effort pointless.
And last, it pays to always be paranoid. It's one thing to admit that you've realistically hit the limit of how secure you can make something, and it's another thing to give up entirely. There are always some small improvements that can be made and they should be explored as though the NSA has focused their resources like a laser on you personally, but only implemented if they realistically help out. Being paranoid doesn't mean never admitting that the "bad guys" sometimes win.
In a recent post entitled Security Administration - Tradeoffs I discussed the dangers of making things worse in an attempt to make them better. All changes have a downside, however small, and if you can't see a downside it means you haven't analyzed the change sufficiently.
This was brought into stark contrast when I read this post on Eric Raymond's blog. The essentials are that some people are agitating to make AIS ship information encrypted and secure (AIS is information that ships broadcast about their position to other ships and shore receivers for collision avoidance). The idea is that this information can be exploited by pirates and terrorists. While technically true, it doesn't take a very deep analysis to see what the downsides would be.
The biggest hole that I can see, and Eric points it out clearly, is that any such system would need a way for ships to still read the transmissions of surrounding transponders. What is to prevent this technology from getting into the hands of pirates? It's a technology that, by necessity, must be installed on thousands of vessels, with access given to any number of individuals of varying degrees of trustworthiness. It doesn't seem like a huge problem for pirates to pose as normal ship operators who need the equipment or, failing that, to take control of a small vessel and torture the captain for his encryption keys. This change might thwart casual weekend pirating teens, but not professionals.
It's a case of seeing a problem (pirates using AIS information to track down targets) and posing a solution (preventing pirates from getting AIS information with encryption) without fully considering the real-world weaknesses of encryption technology or the costs, not just of implementing the solution but of the fragility it would introduce into collision avoidance. I'm reminded of password complexity requirements, which see a problem (easy-to-break passwords) and pose a solution (require really complex passwords) without fully considering how those complexity requirements will just cause people to store unmemorable passwords on paper or in files on computers.
Consider the following two passwords I just made up off the top of my head: 7sne02mnw6slie$a and .@Flyingfoxes001. Both have the same length, and one is very obviously easier to remember without much strain. The second password wouldn't fly in a lot of places, and may even drive a security officer to an early grave. In fact, I'm almost certain that even the first password isn't complex enough for a couple of places I've worked. So which password is more secure? Let's see what my computer thinks.
Hmmmm, okay. Maybe my computer is stupid. Let me see what GMail has to say.
I know which password I'd rather use, and which I'd rather have my users go with (the password reset cost savings alone would be huge.) Still, some people's guts will tell them that the first password is clearly superior.
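For what it's worth, the charset-times-length arithmetic that most strength meters apply agrees. Here's a rough sketch of that estimate in Python (naive by design; real crackers exploit dictionary words, which flatters both passwords):

```python
import math
import string

def naive_bits(pw: str) -> float:
    # Estimate the brute-force search space the way most meters do:
    # (size of the character classes used) raised to the password length.
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    charset = sum(len(c) for c in classes if any(ch in c for ch in pw))
    return len(pw) * math.log2(charset)

for pw in ["7sne02mnw6slie$a", ".@Flyingfoxes001"]:
    print(f"{pw}: {naive_bits(pw):.1f} bits")
```

Under this estimate the memorable password actually scores higher, simply because it draws from more character classes.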
Another example. I recently got a new phone and had to select a 4-digit PIN for customer service. As we should all know, 4 digits provide for 10,000 possible PINs. However, there was a requirement that no two adjacent digits could be identical or consecutive, so 1274 is not a valid PIN. That simple requirement actually reduces the number of possible PINs by nearly two-thirds, to 3,430. This may still be sufficient, but I'd be willing to bet that whoever created the requirement didn't consider this consequence.
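The count is easy to verify by brute force. A quick sketch, treating 9 and 0 as consecutive as well (the reading that yields 3,430):

```python
def valid(pin: str) -> bool:
    for a, b in zip(pin, pin[1:]):
        gap = abs(int(a) - int(b))
        if gap in (0, 1, 9):  # identical, consecutive, or the 0/9 wrap
            return False
    return True

print(sum(valid(f"{n:04d}") for n in range(10_000)))  # prints 3430
```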
No security system is ever fool-proof, particularly when the fool is the one designing the system.
Photo by Fristle
A fascinating blog post by Keyboard Cowboy details his efforts to expose a flaw in the granting of SSL certificates. The summary of the story is that many CAs (Certificate Authorities) grant SSL certificates to domains by sending an e-mail to an administrator's e-mail account on that domain such as "administrator" or "postmaster." Some companies include "ssladmin" in the list of authoritative addresses and many free web mail services allow that particular address to be registered by anyone, making it possible to get yourself a certificate for the web mail domain.
This particular exploit strikes me as almost social engineering. Technically it's not, because it doesn't involve interaction with a person, but the hole exists because of a lack of communication between the web mail providers and the CAs. Imagine if CAs decided to add another address to the list of usable ones, such as "securityadmin"? Unless all web mail providers were informed, this would just re-open the hole.
It brings to mind the idea that all security systems are tradeoffs. Every system that increases security in one way reduces it in another. SSL certificates are certainly a boon to online security; they make web commerce possible. But the realities of issuing certificates mean that not all SSL certificates can be trusted. Even if you are diligent in looking for https: and a valid certificate, your trust may be misplaced, leaving you less secure than if you hadn't trusted any site at all.
It's important to keep this in mind when designing security solutions and policies. Always try to identify how a new policy will reduce security. Complex password requirements make passwords harder to guess, but also more likely to be written down. Time-consuming door entry procedures increase the likelihood of tail-gaters. Police radios help police coordinate their activities, but scanners let criminals keep an eye on them. I would submit that if you are looking at a new security policy and haven't been able to identify the ways in which it harms security, then you haven't thought it through enough to implement it.
Follow me on Twitter @AdamRuth
Author: Adam Ruth
DCOM, or Distributed Component Object Model, is a technology in Windows that allows remote communication between programs. WMI, in particular, uses it to communicate, and a lot of business-oriented server applications use it as well, to communicate between layers. If you've ever spent any time with DCOM, you've probably come to understand just how fragile it can be. When it works, it's like magic, but when it doesn't, it can be a serious hair-pulling experience.
One of the more fragile bits of DCOM is its security. There are four different areas of DCOM, each with its own ACL (Access Control List), and a problem in any one of the four can lead to hard-to-track-down problems. To make matters worse, many applications that use DCOM will alter the security settings, potentially breaking DCOM access for other programs on the same computer. Sometimes it's necessary to just reset DCOM security to its default state, just as it was when Windows was installed.
Last week I found a quick way to do this, but it does require editing the registry so the standard warnings and "do not try this at home" apply. However, if you're stuck fixing a problem down in the guts of DCOM security, editing the registry is the least of your worries.
You can view the DCOM ACLs by running dcomcnfg.exe and navigating to Component Services > Computers > My Computer > Right-click > Properties > COM Security tab.
The ACLs are stored in the registry under the key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Ole, in the following binary values:
- DefaultAccessPermission
- DefaultLaunchPermission
- MachineAccessRestriction
- MachineLaunchRestriction
To reset them, all you need to do is to delete these values. If DCOM doesn't find any ACLs here, then it will use its defaults. Any changes you make will then re-create the values. Of course, you'll want to back them up before you delete them, or you could just rename them to be safe.
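Here's the rename-to-backup approach as a sketch using Python's standard winreg module. It assumes an elevated prompt; on 64-bit Windows you may also need to pass winreg.KEY_WOW64_64KEY when opening the key:

```python
import winreg

NAMES = ["DefaultAccessPermission", "DefaultLaunchPermission",
         "MachineAccessRestriction", "MachineLaunchRestriction"]

key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Ole",
                     0, winreg.KEY_ALL_ACCESS)
for name in NAMES:
    try:
        data, vtype = winreg.QueryValueEx(key, name)
    except FileNotFoundError:
        continue  # value absent: DCOM is already using its default ACL
    winreg.SetValueEx(key, name + "_backup", 0, vtype, data)  # keep a copy
    winreg.DeleteValue(key, name)
winreg.CloseKey(key)
```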
Generally I despise information security measures so strict that computer users basically have to resort to old Selectric typewriters to get any work done.
Password enforcement is one area where I roll my eyes a lot. A good, reasonable password strength policy is appropriate, no question. However, the stricter the policy, the greater the probability that users will end up writing their hard-to-remember passwords on a sticky note under the keyboard, and then, of course, security is worse than if you had allowed certain patterns or longer modified words.
Then I read this article posted on ZDNet.
From the article:
Key findings include:
- In just 110 attempts, a hacker will typically gain access to one new account on every second or a mere 17 minutes to break into 1000 accounts
- About 30% of users chose passwords whose length is equal or below six characters
- Moreover, almost 60% of users chose their passwords from a limited set of alpha-numeric characters
- Nearly 50% of users used names, slang words, dictionary words or trivial passwords (consecutive digits, adjacent keyboard keys, and so on). The most common password among Rockyou.com account owners is “123456”.
One thing I definitely recommend is forcing passwords to be longer than 8 characters. Even HomerSimpson changed to H0m3&s!mP$i is a great password, and one that a user who prefers typing HomerSimpson can remember more easily than a random string.
If you or someone you know creates password policies this strict, I would consider taking a stroll through your users' offices and cubicles and looking under keyboards. And if you find passwords written down, I'd consider easing up on the password policies.
Of course I'm not telling you anything you don't already know.
Photo by Brad & Ying
Author: Adam Ruth
Our goal at Admin Arsenal is to help administrators work remotely by getting them to unplug their sneakernets. System security can sometimes make it difficult to achieve this goal. It would be great to live in a world without the need for security, not least because of its delicious gumdrop houses, use of sparkling rainbow wishes for currency, and ruler Princess Yum-yum-tum-tum of the Rose Scented Flatus. Unfortunately, we don't live in that world (which is a shame because the dollar to rainbow wish exchange rate is at its highest ever.)
Like most things in life there is a trade-off between the security of assets and their availability for use (or administration, as the case may be.) We've found that security issues tend to be the biggest roadblocks in achieving remote administration nirvana. So in that vein, I present the 5 biggest security roadblocks that we encounter regularly.
Firewalls
This one should be obvious. If a computer doesn't allow inbound connections to its remote administration facilities, then remote administration isn't going to work. With the advent of the firewall in Windows XP, more and more people are seeing their remote administration break. It's been a while since the Windows Firewall entered the scene, so this problem, and its cure, are quite well known now. The simplest fix (short of disabling the firewall) is to enable the remote administration firewall exception, the file sharing exception (for remote access to the file system), and the ICMP echo exception (for ping).
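If you'd rather script those exceptions than click through the control panel, the XP-era netsh firewall syntax can be driven from any language, Python included. A sketch (note that Vista and later replace this with the netsh advfirewall syntax):

```python
import subprocess

# XP-era Windows Firewall exceptions for remote administration.
for args in (
    ["netsh", "firewall", "set", "service", "remoteadmin", "enable"],
    ["netsh", "firewall", "set", "service", "fileandprint", "enable"],
    ["netsh", "firewall", "set", "icmpsetting", "8", "enable"],  # inbound ping
):
    subprocess.run(args, check=True)
```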
WMI
WMI (Windows Management Instrumentation) is one of the primary methods of remotely administering computers. It uses DCOM (Distributed Component Object Model) as its method of communication. If the rights for DCOM aren't properly set, then WMI isn't going to work correctly, if at all. WMI, for all its power, is quite fragile, and one of the reasons is its reliance on DCOM, which seems to be the target of well-meaning but misguided security experts and tools. We see this problem so often that we created a tool called DCOMAcls which allows for the remote setting of DCOM security settings.
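A quick way to tell whether DCOM rights are the culprit is to attempt a trivial WMI query against the target. A sketch using the third-party Python wmi package (an assumption on my part; the host and account names are placeholders):

```python
import wmi  # pip install wmi (wraps pywin32's DCOM bindings)

try:
    conn = wmi.WMI(computer="TARGET-PC", user=r"DOMAIN\admin",
                   password="placeholder")
    for os_info in conn.Win32_OperatingSystem():
        print(os_info.Caption)  # success: DCOM and WMI rights are intact
except wmi.x_wmi as err:
    # "Access denied" here usually points at DCOM ACLs, not WMI itself.
    print("WMI connection failed:", err)
```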
Time Synchronization
Windows security has trouble coping with computers whose clocks are out of sync. This is actually for a good reason, since security tokens use timestamps and expirations to prevent their misuse. Depending on the settings of your authentication scheme (such as Active Directory's implementation of Kerberos), a clock that is out of sync by only a few minutes can cause connectivity problems. Usually, though, it takes clocks off by a few hours (such as reversing AM/PM or being on the wrong day). Troubleshooting this can be difficult because the connectivity errors don't usually indicate that it's a time sync problem.
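One quick check is to compare a machine's clock against a reference time source. A sketch using the third-party ntplib package (an assumption; the 5-minute threshold matches Kerberos' default tolerance):

```python
import ntplib  # pip install ntplib

# Ask an NTP server how far off the local clock is.
response = ntplib.NTPClient().request("pool.ntp.org", version=3)
skew = abs(response.offset)  # seconds between the local clock and NTP time

print(f"Clock skew: {skew:.1f} seconds")
if skew > 300:  # Kerberos' default maximum tolerated skew is 5 minutes
    print("Authentication problems are likely until the clock is fixed.")
```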
Name Resolution
WMI's authentication validates the host name being used to access the computer (unless a raw IP address is used). If DNS or DHCP problems cause the wrong computer name to be used for a connection, WMI will quite often fail. Windows DNS is notorious for stale records, and it takes some effort and planning to ensure a DHCP setup that prevents the same addresses from being reused too often. Then throw NetBIOS into the mix, and once your name resolution is compromised, all bets are off.
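Stale records are often easy to spot by comparing a forward lookup with the reverse lookup of the address it returns. A minimal sketch using Python's standard socket module (the host name is a placeholder):

```python
import socket

host = "TARGET-PC"  # placeholder: the name you're using to connect

addr = socket.gethostbyname(host)                        # forward: name -> IP
reverse, _aliases, _addrs = socket.gethostbyaddr(addr)   # reverse: IP -> name

print(f"{host} -> {addr} -> {reverse}")
if reverse.split(".")[0].lower() != host.split(".")[0].lower():
    print("Forward and reverse lookups disagree: suspect stale DNS records.")
```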
The Double-Hop Problem
This problem only affects a certain type of remote administration, so while it's not generally encountered by most administrators, those who use this type of administration will almost always run into it. The double-hop problem is, to put it simply, the inability of a security token to hop twice between machines. If you connect from computer A to computer B to execute a task (such as starting a remote software install) and that task needs files on computer C, then the security token can't hop again from B to C, and access to those files will be denied. There are three solutions to this problem. The first is to configure a level of trust called "delegation" between computers, but this requires some high-level settings in Active Directory and is usually discouraged. The second is to send a user name and password from computer A to computer B so that a new token can be created to hop between B and C, but this is also problematic because it involves sending a password over the wire (possibly encrypted, but out in the open), and there aren't any native Windows tools that work that way. The third is to figure out a way to not need files on computer C, but this may not always be possible. So, while you may never run into this problem, when you do it can be a bear to deal with.
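To make the second solution concrete, here's roughly what it looks like from inside the task running on computer B: authenticate to computer C explicitly, so a fresh token exists for the second hop. A sketch where the server, share, and account names are all placeholders:

```python
import subprocess

# Map the share on computer C with explicit credentials so a new token is
# minted on B for the B -> C hop (the A -> B token can't make that hop).
subprocess.run(
    ["net", "use", r"\\SERVER-C\Installers", "P@ssw0rd",
     "/user:DOMAIN\\deployuser"],
    check=True)

# ... copy or run the installer files from \\SERVER-C\Installers ...

subprocess.run(["net", "use", r"\\SERVER-C\Installers", "/delete"],
               check=True)
```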
Hopefully this list will help any of you having security problems in your daily remote administration lives. Personally, I've filled out the immigration paperwork for the land of gumdrop houses...