Archive - MSP RSS Feed

IAM Profitable: Get Your Piece of the IAM Market

IAM is Profitable

If you’re an MSP or an IT service provider, then you’re involved in a business model that’s always looking to improve its offerings and increase its bottom line. With the global IAM (Identity and Access Management) market increasing at an explosive rate, being able to offer authentication and password management isn’t just a smart move, it’s also a safe move!

How is offering IAM a safe move?

With stricter security compliance requirements being laid down by nearly every industry, country, and state, and with high-profile security breaches, like Home Depot, seeming to occur every month, businesses everywhere are finally opening their eyes to the risk their outdated password and security protocols pose.

This means there is a definite demand for these solutions, so the investment itself is safe. Having such a solution in-house is also a “safe” move in a literal sense: the market demand for IAM exists because of the risk breaches pose. If you’re going to offer a way to mitigate that risk, why not take advantage of it yourself and gain the same benefits you provide your clientele?

How is offering IAM a smart move?

If you can capitalize on potential customers’ need to update their security and authentication, then there is a lot of profit to be made. The key to doing so is differentiating yourself from your competition, and to accomplish that, you need to find the right IAM solution.

What should you look for in an IAM solution?

There are innumerable small features that are nice to have; however, there are five key things you should look for first: comprehensiveness, cloud compatibility, multi-tenancy, vendor support, and the ability to integrate with your existing infrastructure.

Comprehensiveness

It’s not the number of tools you have that matters; it’s how effectively you’re able to use them. Many IAM products on the market these days focus on only a few aspects of the entire process. The IAM solution you decide upon should cover all the needs your clients face, whether they require stronger authentication, password management, or even user auditing. As an added bonus, having fewer moving pieces (programs) decreases the chances of encountering a conflict when you’re setting up the solution, for yourself or for your customers.

Cloud Compatibility

Systems that work in the cloud avoid one of the most difficult hurdles faced by service providers trying to offer IAM services: managing internal servers. Moving to the cloud effectively puts those servers at an equidistant point from both the provider and the client, which makes the whole process that much simpler.

Multi-Tenancy

With multi-tenancy, you can easily separate the data of each client and yourself, while working within a single installation. This is absolutely critical for an MSP or IT professional providing password or security services to multiple clients. Multi-tenancy is designed for MSPs rather than end-users, eliminating the need for multiple installs and making the management process more efficient.

Vendor Support

When your client needs something quickly, you’re going to need some help unless you know everything about the solution you offer. While knowing more is always good, sometimes questions will elude you, and at that point you’ll be glad your vendor is available for some help, support, and insight.

Integration with Existing Systems

If you already have a number of systems in place that do various things, wouldn’t it be ideal if your new IAM solution integrated nicely with them? Whether it’s Kaseya on your network or Office 365 on your clients’ networks, having an IAM solution that works with what you have is great, and if it’s designed to work with those products, then that’s even better.

If your clients (and potential clients) are looking for a solution to their security and authentication problems and you’ve gone with the wrong solution, your clients will be disappointed with the results. You will face an uphill battle of implementing new protocols and dealing with systems that just don’t make sense for you or your client. With the right solution, you become the expert, an invaluable resource to your client. You become their solution, and then you’re able to easily resell the software because they will spread the word of how well it works for their needs.

High-profile security breach scandals are hitting the press with alarming frequency, and compliance standards are advancing at a pace that organizations simply can’t keep up with. Companies found to be non-compliant could face fines or lose access to valuable industry resources. If your business is able to offer solutions to these problems, then clients will be handing you money in an attempt to make their problems go away. Your bottom line will move up just that much higher.

Now, before you go off full of hope for an increase in profit, looking for Identity and Access Management solutions for your business to offer, let me throw another factor into the mix. If you’re reading this blog on the Kaseya website, then you’re likely a Kaseya customer. If you are, or you’re interested in becoming one, it is important to ensure that the solution you choose supports a Kaseya integration. Kaseya AuthAnvil is one such solution. Its suite fulfills the requirements set out above, and offers single sign-on, password management, multi-factor authentication, and many other useful features. So, if you’re looking for a Kaseya-optimized IAM solution, there’s no better place to start.

For more information on offering IAM to your customers: Click Here
For more details on Kaseya AuthAnvil: Click Here

Author: Harrison Depner

Get Your Head Out of the Tech: A Realistic Look at Cloud Computing

Cloud Inspection

To understand new technologies, one must first get past the misinformation and pierce the veil of hype to see the product as it actually is. As you can see from the graph below, tech hype progresses in a fairly typical cycle. Currently, we’re just passing the peak of inflated expectations and are beginning to see negative press. The relatively recent iCloud incident and the death of Code Spaces are just the tip of the iceberg that will soon plunge cloud computing into the trough of disillusionment, where it will remain until people realize what purpose cloud computing actually serves, climb the slope of enlightenment, and set out across the plateau of productivity. This same process happens with every major technology hitting the market. Video killed the radio star, and the internet killed the video star, yet we still have radio stations and television networks. The media simply hypes everything out of proportion.

In spite of the trend set by the media, many technologists try to provide realistic advice to people before they throw out their old technology in preparation for the new. Cloud computing isn’t going to eliminate the need for older systems. If anything, it will just augment their purpose. In the following post, I will outline five key elements of cloud computing in a way that shows their upsides and downsides.

Hype Cycle

Accessibility: Boon and Bane

If a user is on a business trip, they can access the same resources they can at work. The simple ability to access resources from anywhere on the same network is a boon, as it removes much of the need for an internal infrastructure. Unfortunately, as was noted by a French philosopher, a British PM, and a man dressed up as a spider, “with great power comes great responsibility.” Accessibility without appropriate restriction is a serious risk. A cloud-based system on its own cannot know that your users should not be attempting to log in from Elbonia. If your system is made more accessible to your end users, then it’s also being made more accessible to everyone else.

In a nutshell, IF your access security is well developed, then you can reap the benefits of increased availability, otherwise you’re going to have a bad time.
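To make the "log in from Elbonia" point concrete, here is a minimal, fail-closed sketch of the kind of location check an access-security layer might apply before accepting a login. The user names, country codes, and the allowlist policy itself are all hypothetical, and a real system would combine this with other signals rather than rely on geolocation alone.

```python
# Hypothetical policy: the countries each user is expected to log in from.
EXPECTED_COUNTRIES = {
    "alice": {"US", "CA"},
    "bob": {"US"},
}

def login_permitted(user: str, country_code: str) -> bool:
    """Return True if a login attempt from the given ISO country code
    matches the user's expected locations. Unknown users are denied
    by default, i.e. the check fails closed."""
    return country_code.upper() in EXPECTED_COUNTRIES.get(user, set())
```

The design choice worth noting is the empty-set default: an account with no recorded policy is rejected rather than waved through, which is the safer failure mode for exactly the reason the paragraph above describes.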

Maintenance: Can’t Someone Else Do IT?

This heading could have suited a different article entirely, but it works extremely well for the purpose of portraying cloud computing realistically.

There are two ways this scenario typically plays out. Your cloud-based service provider could be amazing — handling updates, resolving issues, and generally fixing everything before you even notice something has gone wrong. If that’s the case, then you’ve reduced the need for the services of your IT department and in-house infrastructure, thus significantly reducing overhead.

Unfortunately, such a result is not guaranteed, and if your provider leaves a lot to be desired, then your experience is going to be less than positive. Rather than staying ahead of new issues as your in-house techs did, your provider may instead do the bare minimum, only completing tasks when they’re specifically told to do so. Micromanagement is expensive, and the potential service outages resulting from poor service can be costlier than maintaining your old in-house IT infrastructure ever was.

In a nutshell, it all comes down to quality of service. If you move to the cloud and your provider is great, then things will run smoothly. If they’re less than stellar, then your experiences will reflect that.

Reliability: Now With More Points of Failure!

The reliability of a system can always be judged by the number of potential points of failure and the redundancy (or lack thereof) surrounding those points. Cloud computing is interesting in how it shifts the reliability of a system from hardware functionality to the availability of services.

Consider the following: if cloud-based systems and in-house systems were both types of vehicles, then in-house would be some sort of SUV, while cloud-based would be some type of high-performance car. Their relative performance comes down to the presence of a well-maintained road (the internet connection). If the road is always going to be available, then the high-performance car will win outright; however, the moment they need to go off-road, the SUV has a clear advantage.

I explain it this way because it’s effective at pointing out the shortcoming of the cloud-based model. If you have no internet, then you have no access. If you have an in-house infrastructure and the internet goes out, then work can still be done across the internal network. The high-performance cloud-mobile may be significantly less likely to break down, but without the internet providing access, it will just sit idle during those periods.

Security: Something Old, Something New…

Security in the cloud is one of those hot-button topics, so let’s keep this as concise as possible. Companies like Code Spaces, which was bankrupted by poor cloud security practices, give providers ample justification to keep their systems top-of-the-line. This means that cloud services and cloud service providers are often extremely focused on security. At the same time, there is no action without a cause. The reason they are so security-minded is that they are aware that, in addition to the usual risks an in-house system may encounter, the new features the cloud is built upon (such as multi-tenancy, shared resources, and availability) open up new vectors for attack which previously could only be theorized. This means that, while security in the cloud is often quite strong, there are also new weaknesses that may circumvent those defenses.

Costs: You Get What You Pay For

In many instances, cloud service providers offer pay-per-use pricing models. This means that you pay based on the resources you use and the duration of time they’re in use. In many cases, this is more cost-effective than having the same systems in-house, and this adaptability and scalability can be great for any business. On the flip side, consider cloud-based infrastructure the same way you would consider leasing a property. It can be more affordable and practical to lease an office; however, in some cases it’s more cost-effective to buy the property. Whether or not you get a cost-effective deal for your cloud-based infrastructure comes down to planning for your needs.
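The lease-versus-buy analogy can be put into numbers with a rough break-even calculation: how many months of pay-per-use fees does it take to exceed the up-front cost of owning the hardware? All the figures and the simplifying assumptions (flat monthly costs, no hardware refresh, no financing) are illustrative only.

```python
import math

def breakeven_months(inhouse_capex: float,
                     inhouse_monthly_opex: float,
                     cloud_monthly_cost: float):
    """Months until in-house hardware (big up-front cost, lower
    monthly cost) becomes cheaper overall than pay-per-use cloud.
    Returns None when the cloud is never more expensive per month,
    i.e. there is no break-even point."""
    monthly_saving = cloud_monthly_cost - inhouse_monthly_opex
    if monthly_saving <= 0:
        return None  # cloud costs less every month; leasing wins outright
    return math.ceil(inhouse_capex / monthly_saving)
```

For example, a hypothetical $24,000 server room with $500/month running costs, compared against a $1,500/month cloud bill, pays for itself in 24 months; shorter planning horizons favor the cloud, longer ones favor buying.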

Whether you’re planning on migrating to the cloud, are remaining in-house, or are deciding on which you would prefer, the first step to building a strong IT infrastructure is finding the right platform to build upon. Kaseya was designed and built with security as the fundamental building block to its core architecture. To learn more: Click Here.

If you’re interested in some ways to protect your cloud-based IT infrastructure: Click Here.

Author: Harrison Depner

IT Security Compliance Requirements and State Laws

State laws have always been a tricky subject when the internet gets involved. Unless your business is large enough to hire a squadron of legal representatives, you just have to accommodate them. In this article, I’m going to outline three of these state laws which may apply to your business. Fair warning: this article should in no way be construed as legal advice. I’m not a lawyer, and I don’t even play one on TV.

California Compliance Law

State: California

Law: CalOPPA (California Online Privacy Protection Act)

Who it applies to: Any commercial website or online service that collects personal information about “individual consumers residing in California who use or visit its commercial Web site or online service.”

What the law requires: CalOPPA can seem to be a fairly complicated law, so let’s break it down into a simpler form. This law focuses on how you handle personal information, and more specifically how your website or service responds to “Do Not Track” messages. This sounds like it could become difficult, but fortunately the law doesn’t require you to respond to “Do Not Track” messages. Instead it only requires that you disclose whether you do or don’t respond to those messages. In other words, you can ignore “Do Not Track” messages and collect personal information despite them; however, if you do that you will need to say so in your privacy policy.

If you decide instead to respond to “Do Not Track” messages, you will need to disclose how you respond, and while CalOPPA doesn’t specifically define how detailed your disclosure must be, it’s safe to assume that such disclosure should be accurate.

Fortunately, most websites already have privacy policies, and adding a few lines stating that you don’t respond to those messages, or alternatively that you do and what your practices are, isn’t too difficult a task.
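The technical side of this is simple: browsers signal the preference with a `DNT: 1` request header, and CalOPPA only obliges you to disclose whether you honor it. A minimal sketch of a server-side check might look like the following, where `site_honors_dnt` stands in for whatever policy your privacy page discloses (the function names are illustrative, not from any particular framework).

```python
def dnt_enabled(headers: dict) -> bool:
    """True if the client sent a Do Not Track signal. Browsers that
    support DNT send the header "DNT" with the value "1"."""
    return headers.get("DNT", "").strip() == "1"

def tracking_allowed(headers: dict, site_honors_dnt: bool) -> bool:
    """CalOPPA does not force you to honor DNT, only to disclose
    whether you do; the site_honors_dnt flag models that disclosed
    policy. Tracking is suppressed only when the site honors DNT
    and the client actually asked for it."""
    if site_honors_dnt and dnt_enabled(headers):
        return False
    return True
```

Note the asymmetry the law creates: a site that disclosed it ignores DNT may track regardless of the header, which is exactly the `site_honors_dnt=False` branch here.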

Nevada Compliance Law

State: Nevada

Law: NRS 603A (Security of Personal Information)

Who it applies to: This law applies to “any governmental agency, institution of higher education, corporation, financial institution or retail operator or any other type of business entity or association that, for any purpose, whether by automated collection or otherwise, handles, collects, disseminates or otherwise deals with nonpublic personal information” of Nevada residents.

What the law requires: This security law sets forth a number of legal obligations for those to whom the law applies. In a nutshell, these obligations include:

  • Protocols surrounding the destruction of records containing personal information. (603A.200)
  • The maintenance of “reasonable security measures to protect” those records. (603A.210)
  • The disclosure of breaches which affected the stored personal information of NV residents. (603A.220)
  • Mandatory PCI Compliance for organizations that accept payment cards. (603A.227)
  • The encryption of Nevada residents’ PI in transmission, and during the movement of storage devices. (603A.227)

What does this mean in a general sense? Well, if this law applies to you or your clients’ businesses, then you have a lot of work to do. Fortunately, these compliance requirements are fairly typical and you may not have to make any changes at all if you’re already PCI compliant. If you do business with residents of Nevada and you’re not following these practices… well, I highly recommend you start working to follow these practices immediately. Some sources point out that this law technically has a national and international reach for any group handling the personal information of Nevada residents.
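Since 603A.227 effectively folds PCI compliance into state law for anyone accepting payment cards, one small, concrete example of what that entails is the familiar display rule for card numbers: show no more than the first six and last four digits anywhere a full number isn’t strictly required. A minimal sketch, not a substitute for a full PCI program:

```python
def mask_pan(pan: str) -> str:
    """Mask a primary account number (PAN) for display, keeping at
    most the first six and last four digits, per the common PCI DSS
    display rule. Whitespace and dashes in the input are ignored."""
    digits = "".join(c for c in pan if c.isdigit())
    if len(digits) < 13:
        raise ValueError("not a plausible card number")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]
```

Rules like this matter because the breach-disclosure duty in 603A.220 only attaches to data that was actually exposed; what is never stored or displayed in full cannot leak in full.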

Massachusetts Compliance Law

State: Massachusetts

Law: 201 CMR 17.00

Who it applies to: Every person or organization that owns or licenses personal information about a resident of Massachusetts and electronically stores or transmits such information.

What the law requires: Fortunately, this law is written in a fairly comprehensive way, so it is quite easy to explain. For those to whom this law applies, a comprehensive information security program must exist, and that program must cover all computers and networks to the extent technically feasible. This security program, when feasible, is required to…

Have secure user authentication protocols which provide:

  • Control over user IDs and other identifiers.
  • Reasonably secure assignment and selection of passwords, or use of unique identifier technologies, such as multi-factor authentication.
  • Control of passwords to ensure they are kept in a location and/or format that does not compromise the security of the data they protect.
  • Restriction of access to active users and active user accounts only.
  • The ability to block access after multiple unsuccessful access attempts, or a limitation placed on access for the particular system.

Secure access control measures that:

  • Restrict access to records and files containing personal information to those who need such information for their job.
  • Assign unique identifications and passwords, which are not vendor-supplied defaults, to any person with access.

As well, the security program must include:

  • Encryption of all transmitted records and files containing PI which will travel across public networks or wirelessly.
  • Reasonable monitoring of systems for unauthorized use of or access to personal information.
  • Encryption of all personal information stored on laptops or other portable devices.
  • Reasonably up-to-date firewall protection and operating system security patches for systems containing personal information which are connected to the Internet.
  • Reasonably up-to-date versions of system security software which must include malware protection with reasonably up-to-date patches and virus definitions, or a version of such software that can still be supported with up-to-date patches and virus definitions, and is set to receive the most current security updates on a regular basis.
  • Education of employees on the proper use of the computer security system and personal information security.
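Of the authentication requirements above, the lockout rule (blocking access after multiple unsuccessful attempts) is the easiest to make concrete. The sketch below models it in memory; the threshold and lockout window are illustrative choices of mine, not values prescribed by 201 CMR 17.00, and a production system would persist this state and log the events.

```python
import time

class LoginThrottle:
    """Block a user after too many failed logins within a window."""

    def __init__(self, max_attempts: int = 5, lockout_seconds: int = 900):
        self.max_attempts = max_attempts
        self.lockout_seconds = lockout_seconds
        self._failures = {}  # username -> (failure count, first failure time)

    def record_failure(self, user: str) -> None:
        count, since = self._failures.get(user, (0, time.time()))
        self._failures[user] = (count + 1, since)

    def record_success(self, user: str) -> None:
        # A successful login clears the failure history.
        self._failures.pop(user, None)

    def is_locked(self, user: str) -> bool:
        count, since = self._failures.get(user, (0, 0.0))
        if count < self.max_attempts:
            return False
        return (time.time() - since) < self.lockout_seconds
```

The same structure also helps with the monitoring bullet: the `_failures` map is precisely the data you would inspect for "reasonable monitoring of systems for unauthorized use."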

As you can see, I saved the best for last. This law, just like the one from Nevada, can have a national or international reach. Now, I didn’t write all of this to make you panic. I feel that these three laws serve as good motivation for any business to improve its IT security and IT policies in general. Additionally, these three laws in combination provide a great framework that any business could build its IT security upon. Security is not the job of a single person, nor is it the job of a single business; it is a task for everyone.

The first step to building a good home is laying down a strong foundation. Similarly, the first step to building a strong and compliant IT infrastructure is finding the right platform to build upon. Kaseya was designed and built with security as the fundamental building block to its core architecture. To learn more: Click Here.

If you’re interested in learning more about PCI compliance: Click Here.

If you’re interested in another interesting compliance requirement for Law Enforcement: Click Here.

Author: Harrison Depner

What can The Simpsons teach us about IT security?

Simpsons IT Security

When it comes to educating your users about IT security, there are a lot of wrong ways to connect the dots between concepts and practices. Simplistic training sessions can make your users feel ignorant, gullible, or even unintelligent. From my experience, the best practices tend to be those which are honest, informative, and entertaining. When you make your lessons entertaining, you improve the amount of knowledge your employees retain. It’s just that simple.

With that in mind, let’s take a look at one lesson which won’t fail to entertain and inform your end users. Here are five lessons about IT Security we can learn from everyone’s favorite jaundiced TV family: The Simpsons.

Quote One: “Me fail English? That’s unpossible!” – Lisa on Ice (Simpsons S6E8)

Lesson in IT security: No one and nothing is infallible.

No matter how adept your computer security skills are, there will always be things that catch you unawares. Viruses, malware, and social engineering are continually being refined, and as such their potency is greater than ever before. You may speak IT as your native language, but that doesn’t mean failure is unpossible.

Malware in the wild is only half of the equation, because Shadow IT also falls under this lesson. Most of the time, when you encounter an instance of Shadow IT, it’s just a user with the best of intentions. It could be a worker trying to improve their productivity, or a “tech savvy” user “improving” the security of their system. Unfortunately, there is a strong correlation between Shadow IT and malware, and, while correlation doesn’t imply causation, in the world of IT security there’s usually a fire if you smell smoke. No one is infallible, and when non-IT staff are free to install apps of their own volition, the risks become compounded.

Quote Two: “You tried your best and you failed miserably. The lesson is: never try.” – Burns’ Heir (Simpsons S5E18)

Lesson in IT security: IT Security is about risk mitigation, not risk elimination.

Let me say that again: IT security is about mitigation, not elimination. Burns’ advice is the inverse of this rule, and unfortunately it’s what many people believe. I’ve heard numerous end users tell me that they “don’t bother running any of those anti-virus programs” because they “used to pay for one and they got a virus anyways.”

“Anti-virus” programs, which are more accurately named “anti-malware” programs, are not infallible. The same goes for firewalls, any form of authentication, or any other IT security product in existence. The only absolute in IT security is the absolute possibility of risk. That doesn’t mean these products don’t work; in fact, many are extremely effective at mitigating the risk of various attack vectors. It’s just that there’s no such thing as a “silver bullet” product capable of eliminating risk.

Quote Three: “Don’t worry, head. The computer will do our thinking now.” – The Computer Wore Menace Shoes (Simpsons S12E6)

Lesson in IT security: Having strong security practices does not mean that you can stop thinking about IT security.

A lot of professionals feel that automation can handle everything, including the security of their IT infrastructure. Unfortunately, that’s only a half-truth. Automation is a glorious tool for the IT professional: mundane and advanced tasks alike can be automated to execute more efficiently than ever before. Never again will driver updates be so strenuous a task. Maintaining security, however, is less of a science and more of an art form, and as such the human element is always critical.

Consider CryptoLocker, which has recently been seen distributing itself under the guise of a fax notification email. Short of sandboxing every internet browser across your entire network, there’s not a lot you can automate to stop this threat. If you pay attention to various security forums, though, you may have found people who had recently encountered that variant. With human intervention, you could then set up an email filter for any emails including the word “fax,” and inform your staff of the risk and how to avoid infection. When that level of automation is possible, you can let the computer do your thinking; until then, you can’t simply assume your systems will be able to handle everything.
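The human-driven filter described above is deliberately crude, and a sketch of it fits in a few lines. The keyword set is the part a person maintains as new campaigns are reported; a real mail gateway rule would weigh sender reputation, attachments, and more, so treat this purely as an illustration of the idea.

```python
# Maintained by a human as new malware campaigns are reported.
SUSPECT_KEYWORDS = {"fax"}

def quarantine(subject: str) -> bool:
    """Flag a message for quarantine if its subject line contains a
    keyword associated with a currently circulating campaign. Matching
    is case-insensitive and on whole words only, so "Halifax office"
    is not flagged."""
    words = subject.lower().split()
    return any(keyword in words for keyword in SUSPECT_KEYWORDS)
```

The whole-word match is a deliberate trade-off: it reduces false positives at the cost of missing things like "fax," with punctuation attached, which is exactly why the rule needs a human reviewing what it catches and what it misses.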

Quote Four: “They have the Internet on computers, now?” – Das Bus (Simpsons S9E14)

Lesson in IT security: Keeping your intranet internal and your DMZ demilitarized are no longer easy tasks.

Yes, Homer, they have the internet on computers now. To be more accurate, they have the internet on everything now. Back in the day, keeping users off of unsecured connections was as easy as telling them that being caught with a personal modem in the office was a termination-worthy offense; however, with the prevalence of cell phones and other portable devices, a far greater risk than the 2400-baud modem of yore lies in every employee’s pocket.

What this means is that endpoint security and security awareness training are more critical than ever before. You can’t always trust your users, but you can teach them to not trust themselves. That may sound like a candidate for “most depressing speech ever given to new employees”, but if they’re aware of the risk each of them poses to the security of your network, they may hesitate before using their smartphone to send out that confidential business information in the future.

Quote Five: “Can’t someone else do it?” – Trash of the Titans (Simpsons S9E22)

Lesson in IT security: This final rule has an easy explanation. No, someone else cannot do it. IT security is everyone’s job.

This episode is one of the most memorable Simpsons episodes, and incidentally it also carries one of the most relevant lessons you can pass on to your users. How does garbage disposal tie into IT security? Quite easily: just consider IT security like running a sanitation department.

Homer’s sanitation plan failed because of the inefficiency inherent in getting a third party to handle all of the jobs previously handled by the citizens. Why is it okay, then, to have IT security handled by a single department or person? People take their garbage to the curb to decrease the work required of sanitation workers; it’s this collaboration that makes the process effective. It logically follows that such collaboration would equally benefit an IT department. If users bring their security concerns, such as potential malware infections, to IT staff rather than leaving it to them to notice and figure out, the entire process is streamlined. Work smarter and minimize the workload placed on IT’s shoulders, because, while someone else can do it, having someone else do it is extremely inefficient.

If you’re looking for even more ways to improve the efficiency of your IT staff, why not take a look at a system which offers innumerable utilities from a single pane of glass?

A properly implemented Single Sign-On solution can also drastically improve the efficiency of business. For more information on that subject: Click Here.

Author: Harrison Depner

Education and Mitigation: Improving IT Security Through User Education

School IT Security

Unless your network consists of a room full of users connecting to an unsecured consumer-grade router, the most vulnerable part of your network is your users. Technology is good at following rules consistently, while people are not. You can trust a computer not to install viruses on itself; it can be infected, but that’s not how it was designed to function. Technology may not always work the way it’s supposed to, but it’s not as though the technology itself has any control over its actions. People, on the other hand… well, you just can’t trust people not to make bad decisions.

Even the Romans knew it. To err is human: errare humanum est. – Seneca

Trusting in your users to do everything right is foolhardy; however, it’s quite possible to teach them not to trust themselves! In the field of IT security you should trust no one. Think about how much risk would be mitigated if you could pass that notion on to your users.

Would your average users stop opening random links people send them featuring “10 cute kitten videos you have to see”? Probably not. But if we change the question a little and ask, “Would your users engage in that sort of risky behavior less often?” then the answer becomes a definitive “yes.”

When it comes to educating your users about IT security, there are a lot of wrong ways to connect the dots. Simplistic training sessions can make your users feel ignorant, gullible, or even unintelligent. From my experience, the best practices tend to be those which are honest, informative, and relevant. Try having a brownbag lunch and discussing IT security issues that have recently received media coverage. People remember large events like when Sony was hacked, so you could work that into a lesson about why it’s dangerous to recycle passwords across websites. Make your lessons relatable and you will improve the amount of knowledge your employees retain. It’s just that simple.

Maybe this doesn’t apply to your business. Perhaps you work at an MSP where the most computer-illiterate employee you have is the janitor from Elbonia, who has his CIS degree printed on what looks to be a cereal box. Well, even then, there’s still plenty to learn.

Work can be hectic and busy. There are always new patches to install and break-fix work to do, and after a certain point it gets easy to become apathetic about the process. No surprises here, but failing to embrace lifelong learning is one of the worst possible things you can do. IT security isn’t something you can just learn and be done with; it’s a constantly changing and evolving field! You can memorize your ABCs, but the closest thing to that I have seen in IT is the four cardinal rules of IT security.

Have you heard of the four cardinal rules? Probably not, because I’m sure my instructor was improvising when he taught us. That would explain why they’re pretty much the same as the four cardinal rules of gun safety. Well, here are those four rules, so read them and see if you pick up anything new!

  1. All guns are always loaded.

    Connecting things to a network is a lot like picking up a gun. It could be loaded (with malware), or be poorly manufactured, which adds the risk of it blowing up in your face. You might want to trust the ergonomic keyboards your techs brought from home, but even that can be risky.

    In short: Assume nothing, and check everything.

  2. Always point the muzzle in a safe direction.

    Patches, updates, hardware installations: this applies to everything. If you’re going to change anything on your network, don’t just plow ahead and do it. Aim those changes in a safe direction (like a test server or non-critical system) and try things out there first. If things work well on the test server, then safely implement the changes across all systems. You wouldn’t play Russian roulette with your life on the line, so why would you do it with your network?

    In short: Test everything before it goes live.

  3. Keep your finger off the trigger until you are on target and ready to shoot.

    It’s good to stay on top of the most recent updates, but there’s a fine line between updating appropriately and excessively. Just because you can update to the newest beta version of Java doesn’t mean you should, and just because there’s a newer version of an OS, that doesn’t mean you need it.

    In short: Don’t change anything on the fly and don’t install anything without considering the results.

  4. Know your target, and what lies beyond.

    When changing anything, make sure you are fully aware of what it is, what it does, and what needs it. Consider what happened with the release of Windows Vista. Many businesses updated to Vista because their hardware supported it; unfortunately, a number of devices which relied on XP’s resources no longer functioned as a result. Users were scrambling to figure out why their printers, webcams, and other gadgets no longer worked, and it caused quite a headache for the people who supported those systems.

    In short: Do your research. Nothing is as modular as it seems, and updating something as innocuous as a printer could bring your network to its knees.

Above all else, always remember that you can never know too much. Keep on learning, keep reading those blogs, and keep reading those forums. You’ll never know if something you learned is relevant until you have to do it yourself.

Now, before you go looking for random lessons to train your coworkers on, let me throw one more factor into the mix. If you're reading this blog at blog.kaseya.com, there's a good chance that you're a Kaseya customer. If you are, or you're interested in becoming one, why not take a look at Kaseya University?

Kaseya University is a state-of-the-art training platform for Kaseya users. It utilizes an innovative blended learning approach to provide both structured and flexible access to technical product training. The Learning Center allows students to build a truly customized learning experience unique to their needs. Kaseya University is kept current with Kaseya product releases, and refreshed multiple times a year. To learn more about Kaseya University: Click here

With that knowledge you can accomplish even more from a single pane of glass.

If you want more information on IT security or just want some topic starters: Click Here

If you want a more direct approach for improving your IT Security: Click Here

Author: Harrison Depner

Haste Prevents Waste. Single Sign-On Can Improve Any MSPs Profit Margin

Single Sign-On Efficiency

As people gain access to more online resources, they need to remember an ever-increasing number of usernames and passwords. Unfortunately, having more usernames and passwords means spending more time keeping track of them all.

If you’re a business owner and you don’t have password management software, then you’re letting your employees manage their passwords on their own. Your users could be setting the stage for every IT security manager’s worst nightmare: an office full of sticky notes with user names and passwords clearly visible around their workstations or cubicles. Without some form of password management solution, your employees are suffering from ongoing frustration as they try to manage their passwords while following your IT security requirements.

If your business is already using password management software, then you should have a solution that manages which resources your employees are able to access, and which credentials they should use to do so. Unfortunately, your password system may not be doing everything it can to provide simple and secure access for your employees.

What if there was a way for users to have strong passwords without the need to remember them, while also retaining a high degree of security?

Regardless of how you’re managing your passwords today, you can eliminate password frustration, increase your employees’ efficiency, and improve your IT security by implementing a single sign-on password management solution.

What is Single Sign-On?

Single sign-on (SSO) is a system through which users can access multiple applications, websites, and accounts by logging in to a single web portal just once. After the user has logged into the portal, he or she can access those resources without needing to enter additional user names or passwords.

Single sign-on is made possible by a password management system that stores each user’s login ID and password for each resource. When a user navigates from a single sign-on portal to a site or application, the password management system typically provides the user’s login credentials behind the scenes. From the users’ perspective, they appear to be logged in automatically.
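The credential-lookup flow described above can be sketched in a few lines of Python. Everything here is illustrative: the class name, the vault's plain-dict storage, and the simulated sign-in. A production system would encrypt stored secrets and hand credentials off via protocols such as SAML rather than returning them directly.

```python
# Hypothetical sketch of an SSO portal's credential vault supplying
# stored logins behind the scenes. All names are illustrative; a real
# system would encrypt the vault and use standard federation protocols.
class CredentialVault:
    def __init__(self):
        self._store = {}  # (user, resource) -> (login_id, password)

    def save(self, user, resource, login_id, password):
        self._store[(user, resource)] = (login_id, password)

    def credentials_for(self, user, resource):
        return self._store.get((user, resource))

def open_resource(vault, user, resource):
    """Called after the user has authenticated to the portal once."""
    creds = vault.credentials_for(user, resource)
    if creds is None:
        raise KeyError(f"No stored credentials for {resource}")
    login_id, _password = creds
    # In a real portal this step would submit the credentials (or a
    # SAML assertion) to the resource; here we just report success.
    return f"{user} signed in to {resource} as {login_id}"

vault = CredentialVault()
vault.save("alice", "crm.example.com", "alice.w", "S3cret!")
print(open_resource(vault, "alice", "crm.example.com"))
# → alice signed in to crm.example.com as alice.w
```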

High quality SSO solutions are able to provide access to a variety of internal and external resources by utilizing standard protocols such as SAML, WS-Fed, and WS-Trust.

As with any password management application, security is a critical consideration for SSO systems. Single sign-on is often implemented in conjunction with some form of multi-factor authentication (MFA) to ensure that only authorized users are able to log into the SSO web portal.

5 Reasons MSPs Benefit from Single Sign-On

  1. SSO can create exceptionally strong password security. When paired with multi-factor authentication (MFA), single sign-on gives you a password management solution that can be both user friendly and extremely secure.
  2. SSO makes enforcing password policies easier. In addition to allowing for strong passwords for critical resources, an SSO system makes it easier to assign and maintain those passwords. In some cases, you can take users out of the password management process entirely—a good SSO system will allow you to assign passwords behind the scenes and change them as needed as your security needs evolve.
  3. Users won’t need or want to save passwords in an unsecured browser. To the average end user, the ability of a web browser like Chrome to remember and submit passwords is a huge bonus; however, while saved passwords offer some of the benefits of single sign-on, web browsers offer none of the security that comes with a true password management solution. When you implement an SSO system, you eliminate the temptation for employees to save their passwords in their browsers, because the SSO portal does that job instead, and often does it better. At that point you could remove that feature from their browsers without the risk of angering your users.
  4. Single sign-on makes your systems easier to secure. Rather than securing dozens or even hundreds of access points to your systems, your security administrators can focus the majority of their efforts on securing just one—the SSO system. If you pair the SSO system with multi-factor authentication, your credentials will be more secure and manageable than a collection of independently secured websites and systems.
  5. Reduced IT help desk calls. Experts estimate that the average employee calls the IT help desk for password assistance about four times per year. Given that an average help desk call takes about 20 minutes, that’s 80 minutes of IT staff time per user per year, or 160 minutes of wasted time (IT staff + end user) per user per year. A good SSO solution will help you put that time, and its cost, back on your bottom line, and free your IT professionals to spend their time on more important projects.
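The help desk arithmetic in point 5 can be made explicit. The call frequency and duration come from the text above; the headcount and hourly rate are assumptions added purely for illustration.

```python
# Back-of-the-envelope cost of password-reset help desk calls.
# The hourly rate and headcount are illustrative assumptions,
# not figures from a study.
calls_per_user_per_year = 4
minutes_per_call = 20
parties_per_call = 2            # the IT tech and the end user

minutes_lost_per_user = (calls_per_user_per_year
                         * minutes_per_call
                         * parties_per_call)
print(minutes_lost_per_user)    # → 160

employees = 100                 # assumed company size
blended_hourly_rate = 45.0      # assumed average loaded cost, USD

annual_cost = employees * (minutes_lost_per_user / 60) * blended_hourly_rate
print(round(annual_cost, 2))    # → 12000.0
```

Even at these modest assumptions, a 100-person shop loses five figures a year to password calls alone.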

Now, before you go looking for a single sign-on system for your business, let me throw one more factor into the mix. If you’re reading this blog at blog.kaseya.com, there’s a good chance that you’re a Kaseya customer. If you are, or you’re interested in becoming one, make sure that the solution you choose supports a Kaseya integration. Scorpion Software was acquired by Kaseya not long ago, and they offer a full Kaseya integration of their user authentication and password management suite. Their suite offers single sign-on, multi-factor authentication, and many other features. So, if you’re looking for a Kaseya-optimized suite, there’s no better place to start. That way you can accomplish even more from a single pane of glass.

If you want more information on what a good single sign-on system should do: Click Here

If you want to know what I would recommend as a single sign-on solution: Click Here

Author: Harrison Depner

The Significant Value of MSP Advanced Monitoring Services

Kaseya Monitor

There’s no doubt that server virtualization has had a tremendous impact on the IT operations of many small and mid-sized businesses (SMBs). For example, the benefits* have included:

  • Reduced administrative costs
  • Improved data resiliency
  • Better application availability
  • Greater business agility, e.g. faster time to market
  • Increased disaster recovery readiness, and even
  • Higher profitability and business growth

However, a recent survey report from VMware and Forrester** suggests that SMBs may not be achieving the ROI they originally expected from virtualization. It also points to the fact that while the majority of SMBs expect their virtualized environments to grow, they are not able to optimize their server installations and are experiencing difficulties in meeting agreed-to IT service levels.

In particular, many SMBs are challenged to optimize the use of their existing servers. A major problem is lack of skilled resources. Partly this is due to the tight budget constraints that prevail in small and mid-sized companies. Partly it’s due to the difficulty of finding and hiring personnel with the right IT skills. The result is that there is a significant opportunity for MSPs to step into the breach and help.

The Forrester report indicates that the average SMB operates a hybrid-cloud environment. About half of their workloads are virtualized, and Forrester expects further virtualization to occur, including the virtualization of strategic applications. Other research suggests that a majority of SMBs are now using public cloud services as well as private cloud services, including significant uptake of software-as-a-service (SaaS) and infrastructure-as-a-service (IaaS) offerings. Coupled with these changes, operating budgets have been moving from IT to line-of-business managers over time.

Together these factors amount to a considerable set of challenges, particularly for IT in mid-sized organizations. For example:

  • In virtualized environments, applications share the processing, memory, storage and bandwidth resources made available by the host server. When one application begins to hog any of these resources, performance can be impacted for the other applications. To overcome this, virtualized server loads need to be rebalanced on a frequent basis, e.g. monthly. As installations grow this can be time consuming and impose unbudgeted costs for IT departments with constrained resources.
  • To provide for IT service continuity during maintenance, critical application performance, and rapid disaster recovery, many virtualized environments support the dynamic switching of applications between servers. The benefits are significant, but there is also a substantial impact on visibility. In the past, when each application ran on its own server, troubleshooting was comparatively easy. With dynamic switching, it can be difficult to determine where an application was running at the exact instant of a fault, which makes root-cause analysis much harder.
  • Managing the performance of public cloud services is also challenging. While IaaS services, such as Amazon EC2, offer management APIs, most SaaS offerings do not provide management capabilities. The best that customers can expect from many of these services is availability guarantees. However, many SaaS applications run in the same kind of virtualized environments as their on-premise counterparts, which means they can be subject to the same kind of co-resident application interference. Yes, they are available, but performance can definitely degrade during peak usage periods.
  • One of the expectations from virtualization was that it would free IT resources to help business counterparts make better-informed technology decisions. However, judging by the results so far, this has been hard for many SMBs to achieve. IT resources have been reduced during the economic downturn. Plus, there’s an expectation that virtualization and self-service private cloud capabilities should significantly improve IT productivity. Lacking resources, IT is now often placed in a position where it’s easier to decline a request than to support it. The result is that line-of-business managers may view IT as the department of “no” versus the department of “know”.

MSPs who offer advanced monitoring services and can take on the risk of providing availability (up-time)-based SLAs are in a great position to help. Firstly, they have the skilled resources that can quickly support the virtualization growth plans of SMBs and to help them optimize their server farm installations. Secondly, they have tools which enable them to track, monitor and manage critical application service levels across the entire infrastructure, including being able to keep track of applications as they migrate dynamically between different virtual machines and different servers. Thirdly, they can provide detailed reporting and analyses to aid discussions about the infrastructure investments needed to maintain SLAs and to inform business/IT decision making.

Tools such as Kaseya Traverse support proactive service-level monitoring, enabling MSPs (and enterprise customers) to get advance warning of pending issues (such as memory/storage/bandwidth constraints) so that they can remediate potential problems before they impact service levels. In addition, by tracking business services (such as supply chain applications) at the highest level, while still being able to drill down to the appropriate server or virtual machine, Traverse allows MSPs to quickly and accurately identify root causes even in the most complex of environments. Add to that support for public cloud APIs, predictive analytics and a powerful reporting capability, and Traverse-equipped MSPs are primed to provide valuable support for today’s mid-sized companies and their hybrid-cloud environments.

By helping the IT departments of mid-sized companies meet their SLA mandates, MSPs can help free in-house IT staff to better respond to business requests, can bolster the reputation of IT within their own organizations, and can help provide the detailed intelligence needed for IT to add strong value in conversations regarding business innovation.

Learn more about how Kaseya technology can help you create advanced managed services.
Read our whitepaper, Proactive Service Level Monitoring: A Must Have for Advanced MSPs.

What tools are you using to manage your IT services?

Author: Ray Wright

References:

* The Benefits of Virtualization for Small and Medium Businesses

** Expand Your Virtual Infrastructure With Confidence And Control

MSP Best Practice: 4 Keys to Automation

Creating Automation

The benefits of automation have been lauded ever since Henry Ford introduced the assembly line to manufacture his famous “any color you like as long as it’s black” Model T, launched in 1908. Before assembly lines were introduced, cars were built by skilled teams of engineers, carpenters and upholsterers who worked to build vehicles individually. Yes, these vehicles were “hand crafted,” but the time needed and the resultant costs were both high. Ford’s assembly line stood this traditional paradigm on its end. Instead of a team of people going to each car, cars now came to a series of specialized workers. Each worker would repeat a set number of tasks over and over again, becoming increasingly proficient, reducing both production time and cost. By implementing and refining the process, Ford was able to reduce assembly time by over 50% and cut the price of the Model T from $825 to $575 in just four years.

Fast forward a hundred years (or so) and think about the way your support capabilities work now. Does your MSP operation function like the teams of pre-assembly line car manufacturers or have you implemented automated processes? Some service providers and many in-house IT services groups still function like the early car manufacturers. The remediation process kicks off when an order (trouble ticket) arrives. Depending on the size (severity) of the order, one or more “engineers” are allocated to solving the problem. Simple issues may be dealt with by individual support staff, but more complex issues – typically those relating to poor system performance or security vs. device failures – can require the skills of several people – specialists in VMware, networking, databases, applications etc. Unfortunately, unlike the hand-crafted car manufacturers who sold to wealthy customers, MSPs can’t charge more for more complex issues. Typically you receive a fixed monthly fee based on the number of devices or seats you support.

So how can you “bring the car to the worker” rather than vice-versa? Automation for sure, but how does it work? What are the key steps you need to take?

  1. Be proactive – the first and most important step is to be proactive. Like Ford with Model T manufacturing, you already know what it takes to keep a customer’s IT infrastructure running. If you didn’t, you wouldn’t be in the MSP business. Use that knowledge to plan out all the proactive actions that need to take place in order to prevent problems from occurring in the first place. A simple example is patch management. Is it automated? If not, as the population of supported devices grows, it’s going to take you longer and longer to complete each update. The days immediately after a patch is released are often the most crucial. If the release eliminates a security vulnerability, the patch announcement can alert hackers to the fact and spur them to attack systems before the patch gets installed. If that happens, now there’s much more to do to eliminate the malware and clean up whatever mess it caused. Automating patch management saves time and gets patches installed as quickly as possible.
  2. Standardize – develop a checklist of technology standards that you can apply to every similar device and throughout each customer’s infrastructure. Standards such as common anti-virus and back-up processes; common lists of recommended standard applications and utilities; recommended amounts of memory; and standard configurations, particularly of network devices. By developing standards you’ll take a lot of guesswork out of troubleshooting. You’ll know if something is incorrectly configured or if a rogue application is running. And by automating the set-up of new users, for example, you can ensure that they at least start out meeting the desired standards. You can even establish an automated process to audit the status of each device and report back when compliance is contravened. The benefit to your customers is fewer problems and faster time to problem resolution. Don’t set your standards so tightly that you can’t meet customers’ needs, but do set their expectations during the sales process so that they know why you have standards and how they help you deliver better services.
  3. Policy management – beyond standards are policies. These are most likely concerned with the governance of IT usage. Policy covers areas such as access security, password refresh, allowable downloads, application usage, who can action updates, etc. Ensuring that users comply with the policies required by your customers and implied by your standards is another way to reduce the number of trouble tickets that get generated. Downloading unauthorized applications, or even unvetted updates to authorized applications, can expose systems to “bloatware”. At best this puts a drain on system resources and can impact productivity, storage capacity and performance. At worst, users may be inadvertently downloading malware, with all of its repercussions. Setting up proactive policy management can prevent the unwanted actions from the outset. Use policy management to continuously check compliance.
  4. Continuously review – even when you have completed the prior three steps there is still much more that can be done. Being proactive will have made a significant impact on the number of trouble tickets being generated. But they will never get to zero – the IT world is just far too complex. However, by reviewing the tickets you can discover further areas where automation may help. Are there particular applications that cause problems, particular configurations, particular user habits, etc.? By continuously reviewing and adjusting your standards, policy management and automation scripts, you will be able to further decrease the workload on your professional staff and more easily “bring the car (problem)” to the right specialist.
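The automated standards audit described in the steps above can be sketched in a few lines. The checklist and device inventory here are invented for illustration; a real RMM platform would collect this data from agents running on each managed device.

```python
# Minimal sketch of an automated standards-compliance audit.
# The checklist values and device records are illustrative assumptions.
STANDARDS = {
    "antivirus": "DefenderAV",   # required AV product (hypothetical name)
    "min_memory_gb": 8,          # recommended minimum memory
    "backup_enabled": True,      # backups must be on
}

devices = [
    {"name": "ws-01", "antivirus": "DefenderAV", "memory_gb": 16, "backup_enabled": True},
    {"name": "ws-02", "antivirus": "OtherAV",    "memory_gb": 4,  "backup_enabled": False},
]

def audit(device):
    """Return a list of standards this device violates."""
    issues = []
    if device["antivirus"] != STANDARDS["antivirus"]:
        issues.append("non-standard antivirus")
    if device["memory_gb"] < STANDARDS["min_memory_gb"]:
        issues.append("insufficient memory")
    if device["backup_enabled"] != STANDARDS["backup_enabled"]:
        issues.append("backup not enabled")
    return issues

for d in devices:
    problems = audit(d)
    if problems:
        print(f"{d['name']}: " + ", ".join(problems))
# → ws-02: non-standard antivirus, insufficient memory, backup not enabled
```

Run on a schedule, a report like this surfaces drift from your standards before it turns into a trouble ticket.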

As Henry Ford knew, automation is a powerful tool that will help you to reduce the number of trouble tickets generated and, more importantly, the number of staff needed to deal with them. By reducing the volume and narrowing the scope, together with the right management tools, you’ll be able to free up staff time to help drive new business, improve customer satisfaction and ultimately increase your sales. By 1914 – 6 years after he started – Henry had an estimated 48% of the US automobile market!

What tools are you using to manage your IT services?

Author: Ray Wright

MSP Best Practice: Thoughts on Being a Trusted Advisor

Kaseya

Nothing is more important for MSPs than retaining existing customers and having the opportunity to upsell new and more profitable services. The cost of customer acquisition can make it challenging to profit significantly during an initial contract period. Greater profitability typically comes from continuing to deliver services year after year and from attaching additional services to the contract as time goes on. Becoming a trusted advisor to customers, so that you are both highly regarded and have an opportunity to influence their purchase decisions, has always been important to this process. However, how you become a trusted advisor, and how quickly, depends on some key factors.

Challenge your customer’s thinking

According to Matthew Dixon and Brent Adamson, authors of “The Challenger Sale*”, it’s not what you sell that matters, it’s how you sell it! When discussing why B2B customers keep buying or become advocates – in short, how they become loyal customers – the unexpected answer is that their purchase experiences have more impact than the combined effect of a supplier’s brand, products/services, and their value-add!

Not that these factors aren’t important – they clearly are vital too – it’s just that beyond their initial purchase needs, what customers most want from suppliers is help with the tough business decisions they have to make. This is especially true when it comes to technology decisions. The best service providers have great experience helping other, similar, companies solve their challenges and are willing to share their knowledge. They sit on the same side of the table as the customer and help evaluate alternatives in a thoughtful and considerate fashion. In short, they operate as trusted advisors.

The key is to start out as you mean to continue. How can you make every customer interaction valuable from the very outset, even before your prospect becomes a customer? Dixon and Adamson suggest the best way is to challenge their thinking with your commercial insight into their business. What might be the potential business outcomes of contracting your managed services? Yes, they will benefit from your expertise in monitoring and maintaining their IT infrastructure, but in addition, how can the unique characteristics of your services and your professional resources enable new business opportunities for them? What might those opportunities be?

Tools matter

Beyond insights gained from working closely with other customers, having the right tools can have a significant impact too. For example, there are monitoring and management tools that can be used to provide visibility into every aspect of a customer’s IT environment. But tools that focus on a single device or technology area, or that are purely technical in nature, have only limited value when it comes to demonstrating support for customers’ business needs. Most customers have a strong interest in minimizing costs and reducing the likelihood and impact of disruption, such as might be caused by a virus or other malware. Being able to discuss security, automation and policy management capabilities and show how these help reduce costs is very important during the purchase process.

Customers who absolutely rely on IT service availability to support their business require greater assurance. Tools that cut through the complexity and aggregate the details to provide decision makers with the information they care about are of immense value. For example, the ability to aggregate information at the level of an IT or business service and report on the associated service level agreement (SLA). Better still, the ability to proactively determine potential impacts to the SLA, such as growing network traffic or declining storage availability, so that preventative actions can take place. Easy-to-understand dashboards and reports showing these results can then be used in discussions about future technology needs, further supporting your trusted advisor approach.

With the right tools you also have a means to demonstrate, during the sales process, how you will be able to meet a prospect’s service-level needs and their specific infrastructure characteristics. In IT, demonstrating how you will achieve what you promise is as important, if not more so, than what you promise. How will you show that you can deliver 97% service availability, reliably and at low risk to you and to your potential customer? Doing so adds to your trusted advisor stature.

Thought leadership

Unless you have a specific industry focus, most customers don’t expect you to know everything about their business – just about how your services and solutions can make a strong difference. They will prefer to do business with market leaders and with those who demonstrate insight and understanding of the latest information technology trends, such as cloud services and mobility, and the associated management implications. First, it reduces their purchase risk and makes it more likely that they will actually make a decision to purchase. Second, it adds credibility, again enabling them to see you as a trusted advisor, one who demonstrates a clear understanding of market dynamics and can assist them in making the right decisions.

Credibility depends a lot on how you support those insights with your own products and services. Are you telling customers that cloud and mobile are increasingly important but without having services that support these areas? What about security and applications support?

Being a trusted advisor means that you, and your organization, must be focused on your customers’ success. Make every interaction of value, leverage the right tools and deliver advanced services and you will quickly be seen as a trusted advisor and be able to turn every new customer into a lasting one and a strong advocate.

Learn more about how Kaseya technology can help you create advanced managed services. Read our whitepaper, Proactive Service Level Monitoring: A Must Have for Advanced MSPs.

*Reference: The Challenger Sale, Matthew Dixon and Brent Adamson, Portfolio/Penguin 2011

What tools are you using to manage your IT services?

Author: Ray Wright

IT Best Practices: Minimizing Mean Time to Resolution

Mean Time to Repair/Resolve

Driving IT efficiency is a topic that always makes it to the list of top issues for IT organizations and MSPs alike when budgeting or discussing strategy. In a recent post we talked about automation as a way to help reduce the number of trouble tickets and, in turn, to improve the effective use of your most expensive asset – your professional services staff. This post looks at the other side of the trouble ticket coin – how to minimize the mean time to resolution of problems when they do occur and trouble tickets are submitted.

The key is to reduce the variability in time spent resolving issues. Easier said than done? Mean Time to Repair/Resolve (MTTR) can be broken down into 4 main activities, as follows:

  • Awareness: identifying that there is an issue or problem
  • Root-cause: understanding the cause of the problem
  • Remediation: fixing the problem
  • Testing: verifying that the problem has been resolved

Of these four components, awareness, remediation, and testing tend to be the smaller activities and also the less variable ones.

The time taken to become aware of a problem depends primarily on the sophistication of the monitoring system(s). Comprehensive capabilities that monitor all aspects of the IT infrastructure and group infrastructure elements into services tend to be the most productive. Proactive service level monitoring (SLM) enables IT operations to view systems across traditional silos (e.g. network, server, applications) and to analyze the performance trends of the underlying service components. By developing trend analyses in this way, proactive SLM can identify future issues before they occur. For example, when application traffic is expanding and bandwidth is becoming constrained, or when server storage is reaching its limit. When unpredicted problems do occur, being able to quickly identify their severity, eliminate downstream alarms, and determine business impact is also important in helping to contain variability and deploy the correct resources for maximum impact.
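The trend analysis described above can be sketched as a simple least-squares fit over daily usage samples, projecting when usage will cross a threshold. The sample data and threshold are invented for illustration; a real SLM tool would use richer models and live telemetry.

```python
# Sketch: fit a linear trend to daily usage samples and estimate how
# many days remain until a threshold is breached. Data is illustrative.
def days_until_threshold(samples, threshold):
    """samples: one usage measurement per day; least-squares slope."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # usage flat or shrinking; no breach predicted
    return (threshold - samples[-1]) / slope

# Storage at 70..78% over 5 days, growing ~2%/day; alert before 90%.
usage = [70, 72, 74, 76, 78]
print(days_until_threshold(usage, 90))  # → 6.0
```

An alert raised six days before the breach gives the operations team time to rebalance or expand storage instead of firefighting an outage.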

Identifying the root cause is usually the biggest cause of MTTR variability and the one that has the highest cost associated with it. Once again the solution lies both with the tools you use and the processes you put in place. Often management tools are selected by each IT function to help with their specific tasks – the network group will have in-depth network monitoring capabilities, the database group database performance tools, and so on. These tools are generally not well integrated and lack visibility at a service level. Also, correlation using disparate tools is often manpower intensive, requiring staff from each function to meet and to try to avoid the inevitable “finger-pointing”.

The service level view is important, not only because it provides visibility into business impact, but also because it represents a level of aggregation from which to start the root cause analysis. Many IT organizations start out by using free, open-source tools but soon realize there is a cost to “free” as their infrastructures grow in size and complexity. Tools that look at individual infrastructure aspects can be integrated but, without underlying logic, they have a hard time correlating events and reliably identifying root cause. Poor diagnostics can be as bad as no diagnostics in more complex environments. Investigating unnecessary down-stream alarms to make sure they are not separate issues is a significant waste of resources.

Consider the frequently cited cause of MTTR variability – poor application performance. In this case there is nothing specifically “broken” so it’s hard to diagnose with point tools. A unified dashboard that shows both application process metrics and network or packet level metrics provides a valuable diagnostic view. As a simple example, a response-time monitor could send an alert that an application’s response time is too high. Application performance monitoring data might indicate that a database is responding slowly to queries because the buffers are starved and the number of transactions is abnormally high. Integrating with network NetFlow or packet data allows immediate drill-down to isolate which client IP address is the source of the high number of queries. This level of integration speeds the root cause analysis and easily removes the finger-pointing so that the optimum remedial action can be quickly identified.
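The drill-down in this example amounts to a simple aggregation over flow records: sum traffic per source IP for flows destined to the database, then rank. The record format below is illustrative rather than an actual NetFlow schema.

```python
# Sketch: find which client IPs generate the most traffic toward a
# database host. Flow records and addresses are invented examples.
from collections import Counter

flows = [
    {"src": "10.0.0.5", "dst": "10.0.1.20", "dst_port": 5432, "packets": 40},
    {"src": "10.0.0.9", "dst": "10.0.1.20", "dst_port": 5432, "packets": 900},
    {"src": "10.0.0.9", "dst": "10.0.1.20", "dst_port": 5432, "packets": 850},
    {"src": "10.0.0.7", "dst": "10.0.1.20", "dst_port": 443,  "packets": 60},
]

def top_talkers(flows, db_host, db_port, n=3):
    """Aggregate packet counts per source IP for traffic to the DB."""
    counts = Counter()
    for f in flows:
        if f["dst"] == db_host and f["dst_port"] == db_port:
            counts[f["src"]] += f["packets"]
    return counts.most_common(n)

print(top_talkers(flows, "10.0.1.20", 5432))
# → [('10.0.0.9', 1750), ('10.0.0.5', 40)]
```

One ranked list like this replaces a cross-team meeting: the runaway client is identified in seconds rather than hours.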

Once a problem has been identified the last two pieces of the MTTR equation can be satisfied. The times required for remediation and testing tend to be far less variable and can be shortened by defining clear processes and responsibilities. Automation can also play a key role. For example, a great many issues are caused by misconfiguration. Rolling back configurations to the last good state can be done automatically, quickly eliminating issues even while in-depth root-cause analysis continues. Automation can play a vital role in testing too, by making sure that performance meets requirements and that service levels have returned to normal.
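The automated rollback described above can be sketched as a version history that restores the most recent snapshot known to be good. The class, configuration fields, and validation flag are all illustrative assumptions.

```python
# Sketch: keep a history of device configurations and roll back to the
# last version that passed validation. Names and data are illustrative.
class ConfigHistory:
    def __init__(self):
        self._versions = []  # list of (config_dict, passed_validation)

    def record(self, config, passed_validation):
        self._versions.append((dict(config), passed_validation))

    def last_good(self):
        """Return the most recent configuration that passed validation."""
        for config, ok in reversed(self._versions):
            if ok:
                return config
        return None

history = ConfigHistory()
history.record({"mtu": 1500, "vlan": 10}, passed_validation=True)
history.record({"mtu": 9000, "vlan": 10}, passed_validation=False)  # bad change
print(history.last_good())  # → {'mtu': 1500, 'vlan': 10}
```

Restoring the last-good snapshot brings the service back immediately, while the faulty change remains in the history for root-cause analysis.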

To maximize IT efficiency and effectiveness and help minimize mean time to resolution, IT management systems can no longer work in vertical or horizontal isolation. The interdependence between services, applications, servers, cloud services and network infrastructure mandates the adoption of comprehensive service-level management capabilities for companies with more complex IT service infrastructures. The amount of data generated by these various components is huge, and the rate of generation is so fast that traditional point tools cannot integrate or keep up with any kind of real-time correlation.

Learn more about how Kaseya technology can help you manage your increasingly complex IT services. Read our whitepaper, Managing the Complexity of Today’s Hybrid IT Environments

What tools are you using to manage your IT services?

Author: Ray Wright
