
IAM Profitable: Get Your Piece of the IAM Market

IAM is Profitable

If you’re an MSP or an IT service provider, then you’re involved in a business model that’s always looking to improve its offerings and increase its bottom line. With the global IAM (Identity and Access Management) market increasing at an explosive rate, being able to offer authentication and password management isn’t just a smart move, it’s also a safe move!

How is offering IAM a safe move?

With stricter security compliance requirements being laid down by nearly every industry, country, and state, and with high-profile security breaches like the one at Home Depot seeming to occur every month, businesses everywhere are finally opening their eyes to the risk their outdated password and security protocols pose.

This means that there is a definite need for these solutions, so the investment itself is safe. Having such a solution in-house is also a “safe” move in its own right. The market demand for IAM exists because of the risk breaches pose; if you’re going to offer a way to mitigate that risk, why not take advantage of it yourself and gain the same benefits you provide your clientele?

How is offering IAM a smart move?

If you can capitalize on potential customers’ need to update their security and authentication, then there is a lot of profit to be made. The key to doing so is differentiating yourself from your competition, and to accomplish this, you need to find the right IAM solution.

What should you look for in an IAM solution?

There are innumerable small features that are nice to have, but there are five key things you should look for first: comprehensiveness, cloud compatibility, multi-tenancy, vendor support, and the ability to integrate with your existing infrastructure.

Comprehensiveness

It’s not the number of tools you have that matters, it’s how effectively you’re able to use them. Many IAM products on the market these days focus only on a few aspects of the entire process. To find a winner, the IAM solution you decide upon should cover all the aspects your clients are facing, whether they require stronger authentication, password management, or even user auditing. As an added bonus, having fewer moving pieces (programs) decreases the chances of encountering a conflict when you’re setting up the solution, for yourself or your customers.

Cloud Compatibility

Systems that work in the cloud avoid one of the most difficult hurdles faced by service providers trying to offer IAM services: managing the internal servers. Moving to the cloud effectively puts those servers at an equidistant point from both the provider and the client, which makes the whole process that much simpler.

Multi-Tenancy

With multi-tenancy, you can easily separate the data of each client and yourself, while working within a single installation. This is absolutely critical for an MSP or IT professional providing password or security services to multiple clients. Multi-tenancy is designed for MSPs rather than end-users, eliminating the need for multiple installs and making the management process more efficient.
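
To make the idea concrete, here is a minimal sketch (illustrative only, and not AuthAnvil’s actual data model) of tenant-scoped storage: every record carries a tenant identifier and every query filters on it, so one installation can serve many clients without ever mixing their data.

```python
# Tenant-scoped storage in a single installation (illustrative sketch only).
from dataclasses import dataclass

@dataclass
class Credential:
    tenant_id: str   # which client this record belongs to
    username: str
    secret_ref: str  # a pointer to the stored secret, never the secret itself

VAULT = [
    Credential("client-a", "admin", "ref-001"),
    Credential("client-b", "admin", "ref-002"),
]

def credentials_for(tenant_id: str):
    """Every read is scoped to a tenant; there is no unscoped query path."""
    return [c for c in VAULT if c.tenant_id == tenant_id]

print([c.username for c in credentials_for("client-a")])  # only client-a's records
```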

Vendor Support

When your client needs something quickly, you’re going to need some help unless you know everything about the solution you offer. While knowing more is always good, sometimes questions will elude you, and at that point you’ll be glad your vendor is available for some help, support, and insight.

Integration with Existing Systems

If you already have a number of systems in place that do various things, wouldn’t it be ideal if your new IAM solution integrated nicely with them? Whether it’s Kaseya on your network or Office 365 on your clients’, having an IAM solution that works with what you have is great, and if it’s designed to work with those products, then that’s even better.

If your clients (and potential clients) are looking for a solution to their security and authentication problems and you’ve gone with the wrong solution, your clients will be disappointed with the results. You will face an uphill battle of implementing new protocols and dealing with systems that just don’t make sense for you or your client. With the right solution, you become the expert, an invaluable resource to your client. You become their solution, and then you’re able to easily resell the software because they will spread the word of how well it works for their needs.

High-profile security breach scandals are hitting the press with alarming frequency, and compliance standards are advancing at a pace that organizations simply can’t keep up with. Companies found in non-compliance could face fines or lose access to valuable industry resources. If your business is able to offer solutions to these problems, then clients will be handing you money in an attempt to make their problems go away, and your bottom line will climb that much higher.

Now, before you go off full of hope for an increase in profit, looking for Identity and Access Management solutions for your business to offer, let me throw another factor into the mix. If you’re reading this blog on the Kaseya website, then you’re likely a Kaseya customer. If you are, or you’re interested in becoming one, it is important to ensure that the solution you choose supports a Kaseya integration. Kaseya AuthAnvil is one such solution. Its suite fulfills the requirements set out above and offers single sign-on, password management, multi-factor authentication, and many other useful features. So, if you’re looking for a Kaseya-optimized IAM solution, there’s no better place to start.

For more information on offering IAM to your customers: Click Here
For more details on Kaseya AuthAnvil: Click Here

Author: Harrison Depner

IT Management Community Participation Extends Knowledge and Adds Value

Waltham Community Meetup

This week I attended a Kaseya “Local Meetup” event in Waltham, Massachusetts, and it struck me again just how important it is to have a strong IT community. In the Meetup evaluation forms, virtually everyone who attended said that sharing ideas with like-minded people was a key benefit to attending the event. Without exception, everyone left the meeting with new contacts and friendships in the IT management community.

A few things about the meeting really hit home:

Tips and Tricks:

Kirk Feathers, a leader in the Kaseya technical community, led a “Tips and Tricks” session, sharing interesting and innovative approaches to maximize the usage and benefit of IT management tools, both from Kaseya and its partners. Everyone in the room chimed in, asking questions and offering their own insights. Copious notes were taken, and more than once, two or more people set up follow-on conversations on particular topics.

Collaboration Groups:

Establishing collaboration groups is a great way to stay in touch and share information. Chris Anderson, Director of Managed Services for Infranet Solutions in Quincy, Massachusetts, shared a great story about collaboration groups. I met Chris earlier this year at “Kaseya Connect,” our annual user conference. During his three days at the event, Chris made it a point to build out his community contacts to the point where he is now part of a formal group which shares automation scripts. Using existing scripts and creating new ones is key to efficiently and effectively managing large numbers of endpoints. Chris tells me that the collaboration group’s sharing of ideas and actual scripts is substantially improving their speed-to-automation.

Feedback and Input:

Mads Srinivasan, product manager for Kaseya’s mobility management solution, shared the latest mobility management development work, complete with a demonstration. The purpose was to obtain feedback and input from the group on the features and presentation layer. The session had a good 30 minutes of excellent feedback and suggestions. Mads had an ulterior motive in that he wants 100 beta customers to test out the latest work; virtually everyone in the room signed up.

Time before and after the event was reserved for networking and everyone took advantage. People had a chance to meet the many Kaseya leaders who were present, but more importantly, they built out their IT management community connections. By the end of the event, business cards were swapped, and emails were exchanged all around.

This experience also reinforced the importance of the “Kaseya Community” program, which includes sponsoring these local Meetups, forums for sharing, event postings, etc. All Kaseya users should join to share information and learn about the latest happenings.

Author: Tom Hayes

Dropbox wasn’t hacked. Some of their users just dropped the box…

Dropbox Security Breach

A mixed metaphor never hurt anyone, but when you mix your passwords into everything it’s not going to go well.

Password mixing (reusing passwords) is what many believe was the cause of the recent Dropbox account “breach.” Using the same password for everything is a huge problem. A chain is only as strong as its weakest link, and the same applies to passwords. The more websites you use a password on, the more likely it is to be leaked in a breach, and unfortunately, the reach and potential damage of that breach also grow.

Reused Password Graph

It’s not a difficult concept if you consider it for long. If one password is used on five websites, then that password is roughly five times as likely to be leaked, because there are five times as many locations where it is stored. At the same time, that password provides access to five times as many websites, which means the person who obtains it potentially gains far more information than any single account would expose on its own. The more information they have, the easier it becomes to gain access to other accounts. This appears to be what happened with Dropbox.
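
To put rough numbers on that intuition, here is a small sketch; the per-site breach probability is purely an illustrative assumption, but it shows how reuse multiplies the chance that a shared password is exposed somewhere.

```python
# Illustrative sketch: how password reuse multiplies exposure.
# Assume each site storing the password has an independent chance p of being
# breached in a given year; the password leaks if at least one site is breached.

def leak_probability(p_per_site: float, n_sites: int) -> float:
    """Probability that at least one of n sites exposes the shared password."""
    return 1 - (1 - p_per_site) ** n_sites

p = 0.02  # assumed 2% annual breach chance per site (illustration only)
for n in (1, 5, 10):
    print(f"{n:2d} site(s): {leak_probability(p, n):.1%} chance of exposure")
# 1 site: 2.0%, 5 sites: ~9.6%, 10 sites: ~18.3%
```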

Think of it this way, if I gain access to your email, then I can reset the passwords of almost every account tied to that email. What are the chances that your email contains information about your choice of banking institution, online shopping account, or PayPal perhaps?

This wasn’t a breach of Dropbox’s systems; it was a failure of their end-users’ password management skills. When users reuse their passwords across so many websites, they sow the seeds of their own ruin.

For system administrators, the source of this problem is painfully apparent. Quite often, a system administrator will have to remember ten or more passwords just for their day-to-day tasks. Add the 20 or so personal accounts that need passwords and the 30 passwords needed for various lesser-used accounts and systems, and you wind up with 60 or more passwords to remember. Now consider every end-user that the system administrator manages. How many passwords do you think those end-users each have?

This is why password reuse is such a problem. There are just too many passwords for anyone to handle!

That’s why you need some sort of solution to the password problem. There’s no need to hire a developer to build you a password management system; you just need a password management solution. Let’s throw one more factor into the mix. If you’re reading this blog, there’s a good chance that you’re already a Kaseya customer. If so, then make sure that the solution you choose supports a Kaseya integration. That way you can accomplish even more from a single pane of glass.

Only Kaseya AuthAnvil solves that problem, allowing organizations to secure their most valuable asset – their data – by minimizing the risk of password-related security breaches. Learn more about AuthAnvil.

Author: Harrison Depner

Get Your Head Out of the Tech: A Realistic Look at Cloud Computing

Cloud Inspection

To understand new technologies, one must first get past the misinformation and pierce the veil of hype to see the product as it actually is. As you can see from the graph below, tech hype progresses in a fairly typical cycle. Currently, we’re just past the peak of inflated expectations and are starting to see negative press. The relatively recent iCloud incident and the death of Code Spaces are just the tip of the iceberg that will soon plunge cloud computing into the trough of disillusionment, where it will remain until people realize what purpose cloud computing actually serves, climb the slope of enlightenment, and set out across the plateau of productivity. The same process happens with every major technology hitting the market. Video killed the radio star, and the internet killed the video star, yet we still have radio stations and television networks. The media simply hypes everything out of proportion.

In spite of the trend set by the media, many technologists try to provide realistic advice to people before they throw out their old technology in preparation for the new. Cloud computing isn’t going to eliminate the need for older systems. If anything, it will just augment their purpose. In the following post, I will outline five key elements of cloud computing in a way that shows their upsides and downsides.

Hype Cycle

Accessibility: Boon and Bane

If a user is on a business trip, they can access the same resources that they can at work. The simple ability to access resources from anywhere within the same network is a boon, as it removes much of the need for an internal infrastructure. Unfortunately, as was noted by a French philosopher, a British PM, and a man dressed up as a spider, “with great power comes great responsibility.” Accessibility without appropriate restriction is a serious risk. A cloud-based system on its own cannot know that your users should not be attempting to log in from Elbonia. If your system is made more accessible to your end-users, then it’s also being made more accessible to everyone else.

In a nutshell: if your access security is well developed, then you can reap the benefits of increased availability; otherwise, you’re going to have a bad time.
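
As a concrete illustration of what “appropriate restriction” can look like, here is a minimal sketch; it is not any particular vendor’s API, and the country allowlist is an assumption, but it shows a login policy that refuses to trust the password alone when a request arrives from an unexpected location.

```python
# Minimal sketch of a location-aware access rule (illustrative only).
ALLOWED_COUNTRIES = {"US", "CA"}  # assumption: where your users actually work from

def evaluate_login(username: str, password_ok: bool, source_country: str) -> str:
    """Decide what to do with a login attempt based on where it originates."""
    if not password_ok:
        return "deny"
    if source_country in ALLOWED_COUNTRIES:
        return "allow"
    # Correct password but unexpected location: demand a second factor
    # instead of silently allowing the session.
    return "require_mfa"

print(evaluate_login("jdoe", True, "US"))  # allow
print(evaluate_login("jdoe", True, "EL"))  # require_mfa (the "Elbonia" case)
```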

Maintenance: Can’t Someone Else Do IT?

This entry would have suited a different article entirely, but it works extremely well for the purpose of realistically portraying cloud computing.

There are two ways this scenario typically plays out. Your cloud-based service provider could be amazing — handling updates, resolving issues, and generally fixing everything before you even notice something has gone wrong. If that’s the case, then you’ve reduced the need for the services of your IT department and in-house infrastructure, thus significantly reducing overhead.

Unfortunately, such a result is not guaranteed, and if your provider leaves a lot to be desired, then your experience is going to be less than positive. Rather than staying ahead of new issues as your in-house techs did, your provider may instead do the bare minimum, only completing tasks when they’re specifically told to do so. Micromanagement is expensive, and the potential service outages resulting from poor service can be costlier than maintaining your old in-house IT infrastructure ever was.

In a nutshell, it all comes down to quality of service. If you move to the cloud and your provider is great, then things will run smoothly. If they’re less than stellar, then your experiences will reflect that.

Reliability: Now With More Points of Failure!

The reliability of a system can always be judged by the number of potential points of failure and the redundancy (or lack thereof) surrounding those points. Cloud computing is interesting in how it shifts the reliability of a system from the functionality of hardware to the availability of services.

Consider the following: if cloud-based systems and in-house systems were both types of vehicles, then in-house would be some sort of SUV, while cloud-based would be some type of high-performance car. Their relative performance then comes down to the presence of a well-maintained road (the internet connection). If the road is always going to be available, then the high-performance car will win outright; however, the moment they need to go off-road, the SUV has a clear advantage.

I explain it this way because it’s effective at pointing out the shortcoming of the cloud-based model. If you have no internet, then you have no access. If you have an in-house infrastructure and the internet goes out, then work can still be done across the internal network. The high-performance cloud-mobile may be significantly less likely to break down, but without the internet providing access it will just sit idle during those periods.

Security: Something Old, Something New…

Security in the cloud is one of those hot-button topics, so let’s keep this as concise as possible. Companies like Code Spaces, which was bankrupted by poor cloud security practices, give providers ample justification for keeping their systems top-of-the-line. This means that cloud services and cloud service providers are often extremely focused on security. At the same time, there is no action without a cause. The reason they are so security-minded is that they are aware that, in addition to the usual risks an in-house system may encounter, the new features the cloud is built upon (such as multi-tenancy, shared resources, and availability) open up attack vectors which previously could only be theorized. This means that, while security in the cloud is often quite strong, there are also new weaknesses which may circumvent those defenses.

Costs: You Get What You Pay For

In many instances, cloud service providers offer pay-per-usage pricing models. This means that you pay based on the resources you are using and the duration of time they’re in use. In many cases, this is more cost effective than having the same systems in-house, and that adaptability and scalability can be great for any business. On the flip side, consider cloud-based infrastructure the same way you would consider leasing a property. It can be more affordable and practical to lease an office; however, in some cases it’s more cost effective to buy the property. Whether or not you get a cost-effective deal on your cloud-based infrastructure comes down to planning for your needs.

Whether you’re planning on migrating to the cloud, are remaining in-house, or are deciding on which you would prefer, the first step to building a strong IT infrastructure is finding the right platform to build upon. Kaseya was designed and built with security as the fundamental building block to its core architecture. To learn more: Click Here.

If you’re interested in some ways to protect your cloud-based IT infrastructure: Click Here.

Author: Harrison Depner

IT Security Compliance Requirements and State Laws

State laws have always been a tricky subject when the internet gets involved. Unless your business is large enough to hire a squadron of legal representatives, you just have to accommodate them. In this article, I’m going to outline three of these state laws which may apply to your business. Fair warning: this article should in no way be construed as legal advice. I’m not a lawyer, and I don’t even play one on TV.

California Compliance Law

State: California

Law: CalOPPA (California Online Privacy Protection Act)

Who it applies to: Any commercial website or online service that collects personal information about “individual consumers residing in California who use or visit its commercial Web site or online service.”

What the law requires: CalOPPA can seem to be a fairly complicated law, so let’s break it down into a simpler form. This law focuses on how you handle personal information, and more specifically how your website or service responds to “Do Not Track” messages. This sounds like it could become difficult, but fortunately the law doesn’t require you to respond to “Do Not Track” messages. Instead it only requires that you disclose whether you do or don’t respond to those messages. In other words, you can ignore “Do Not Track” messages and collect personal information despite them; however, if you do that you will need to say so in your privacy policy.

If you decide instead to respond to “Do Not Track” messages, you will need to disclose how you respond, and while CalOPPA doesn’t specifically define how detailed your disclosure must be, it’s safe to assume that such disclosure should be accurate.

Fortunately, most websites already have privacy policies, and adding a few lines stating either that you don’t respond to those messages, or that you do and how you respond, isn’t too difficult a task.

Nevada Compliance Law

State: Nevada

Law: NRS 603A (Security of Personal Information)

Who it applies to: This law applies to “any governmental agency, institution of higher education, corporation, financial institution or retail operator or any other type of business entity or association that, for any purpose, whether by automated collection or otherwise, handles, collects, disseminates or otherwise deals with nonpublic personal information” of Nevada residents.

What the law requires: This security law sets forth a number of legal obligations for those to whom the law applies. In a nutshell, these obligations include:

  • Protocols surrounding the destruction of records containing personal information. (603A.200)
  • The maintenance of “reasonable security measures to protect” those records. (603A.210)
  • The disclosure of breaches which affected the stored personal information of NV residents. (603A.220)
  • Mandatory PCI Compliance for organizations that accept payment cards. (603A.227)
  • The encryption of Nevada residents’ PI in transmission, and during the movement of storage devices. (603A.227)

What does this mean in a general sense? Well, if this law applies to you or your clients’ businesses, then you have a lot of work to do. Fortunately, these compliance requirements are fairly typical and you may not have to make any changes at all if you’re already PCI compliant. If you do business with residents of Nevada and you’re not following these practices… well, I highly recommend you start working to follow these practices immediately. Some sources point out that this law technically has a national and international reach for any group handling the personal information of Nevada residents.

Massachusetts Compliance Law

State: Massachusetts

Law: 201 CMR 17.00

Who it applies to: Every person or organization that owns or licenses personal information about a resident of Massachusetts and electronically stores or transmits such information.

What the law requires: Fortunately this law is written in a fairly comprehensible way, so it is quite easy to explain. For those to whom this law applies, a comprehensive information security program must exist, and that program must cover all computers and networks to the extent technically feasible. This security program, where feasible, is required to…

Have secure user authentication protocols which provide:

  • Control over user IDs and other identifiers.
  • Reasonably secure assignment and selection of passwords, or use of unique identifier technologies, such as multi-factor authentication.
  • Control of passwords to ensure they are kept in a location and/or format that does not compromise the security of the data they protect.
  • Restriction of access to active users and active user accounts only.
  • The ability to block access after multiple unsuccessful access attempts, or an equivalent limitation for the particular system (see the sketch after these requirements).

Secure access control measures that:

  • Restrict access to records and files containing personal information to those who need such information for their job.
  • Assign unique identifications and passwords, which are not vendor-supplied defaults, to each person with access.

As well, the security program must include:

  • Encryption of all transmitted records and files containing PI which will travel across public networks or wirelessly.
  • Reasonable monitoring of systems for unauthorized use of or access to personal information.
  • Encryption of all personal information stored on laptops or other portable devices.
  • Reasonably up-to-date firewall protection and operating system security patches for systems containing personal information which are connected to the Internet.
  • Reasonably up-to-date versions of system security software which must include malware protection with reasonably up-to-date patches and virus definitions, or a version of such software that can still be supported with up-to-date patches and virus definitions, and is set to receive the most current security updates on a regular basis.
  • Education of employees on the proper use of the computer security system and personal information security.

As you can see, I saved the best for last. This law, just like the one from the state of Nevada, can have a national or international reach. Now, I didn’t write all of this to make you panic. I feel that these three laws serve as good motivation for any business to improve their IT security and IT policies in general. Additionally, these three laws in combination provide a great framework that any business could build their IT security upon. Security is not the job of a single person, nor is it the job of a single business; it is a task for everyone.

The first step to building a good home is laying down a strong foundation. Similarly, the first step to building a strong and compliant IT infrastructure is finding the right platform to build upon. Kaseya was designed and built with security as the fundamental building block to its core architecture. To learn more: Click Here.

If you’re interested in learning more about PCI compliance: Click Here.

If you’re interested in another interesting compliance requirement for Law Enforcement: Click Here.

Author: Harrison Depner

Building the World’s Fastest Remote Desktop Management – Part 4

Fastest Remote Control

Building the world’s fastest remote desktop management solution is a bit like building a high-performance car. The first things to worry about are how fast it goes from zero to 60 and how well it performs on the road. Once these are ensured, designers can then add the bells and whistles which make the high-end experience complete.

In our first three installments in this series (Part 1, Part 2 and Part 3), we talked about the remote management technology being used to deliver speed and performance, and now we are ready to talk about the remote management bells and whistles that deliver the high-end experience IT administrators need. Kaseya Remote Control R8, which became available on September 30, adds six new enhancements to ensure greater security and compliance and to help IT administrators resolve issues more quickly on both servers and workstations:

  1. Private Remote Control sessions:

    In many industries, such as healthcare, finance, retail, education, etc., security during a remote control session is crucial. Administrators cannot risk having the person next to the server or workstation view sensitive information on the remote screen. Kaseya Remote Control R8 allows IT administrators to establish private Remote Control sessions for Windows so that administrators can work on servers or workstations securely and discreetly.

  2. Track and report on Remote Control sessions:

    These same industries have strict compliance requirements. Remote Control R8 allows IT organizations to track and report on Remote Control sessions by admin, by machine, per month, week, day, etc., with a history of access to meet compliance requirements.

  3. Shadow end user terminal server sessions:

    Many users run terminal server sessions for which they may need assistance. Remote Control R8 lets IT administrators shadow end user terminal server sessions to more easily identify and resolve user issues.

  4. See session latency stats:

    Poor performance is often hard to diagnose. Remote Control R8 shows session latency stats during the remote control session so administrators are aware of the connection strength and can determine its relevance to an end user’s issues.

  5. Support for Windows Display Scaling:

    HiDPI displays are quickly becoming the norm for new devices. Remote Control R8 includes support for these display types (e.g., Retina) to allow IT administrators to remotely view the latest, high-definition displays.

  6. Hardware acceleration:

    Remote management becomes much easier if one can clearly see the remote machine’s screen. Remote Control R8 enables hardware acceleration, leveraging the video card for image processing, for a sharper remote window picture while reducing CPU overhead by 25%-50%, depending on the admin’s computer hardware.

Just like your favorite high-performance car, Kaseya Remote Control R8 delivers the speed, performance and features IT administrators need for a high-end management experience.

Let Us Know What You Think

The new Desktop Remote Control became available with VSA R8 on September 30.

We’re looking forward to receiving feedback on the new capabilities. To learn more about Kaseya and our plans please take a look at our roadmap to see what we have in store for future releases of our products.

Author: Tom Hayes

What can The Simpsons teach us about IT security?

Simpsons IT Security

When it comes to educating your users about IT security, there are a lot of wrong ways to connect the dots between concepts and practices. Simplistic training sessions can make your users feel ignorant, gullible, or even unintelligent. In my experience, the best practices tend to be those which are honest, informative, and entertaining. When you make your lessons entertaining, you improve the amount of knowledge your employees retain. It’s just that simple.

With that in mind, let’s take a look at some lessons which won’t fail to entertain and inform your end users: five lessons about IT security we can learn from everyone’s favorite jaundiced TV family, The Simpsons.

Quote One: “Me fail English? That’s unpossible!” – Lisa on Ice (Simpsons S6E8)

Lesson in IT security: No one, and nothing, is infallible.

No matter how adept your computer security skills are, there will always be things which catch you unaware. Viruses, malware, and social engineering are continually being refined, and as such their potency is always greater than ever before. You may speak IT as your native language, but that doesn’t mean failure is unpossible.

Malware in the wild is only half of the equation, because Shadow IT also falls under this lesson. Most of the time, when you encounter an instance of Shadow IT, it’s just a user with the best of intentions. It could be a worker trying to improve their productivity, or a “tech savvy” user “improving” the security of their system. Unfortunately, there is a strong correlation between Shadow IT and malware, and, while correlation doesn’t imply causation, in the world of IT security there’s usually a fire if you smell smoke. No one is infallible, and when non-IT staff are free to install apps of their own volition, the risks become compounded.

Quote Two: “You tried your best and you failed miserably. The lesson is: never try.” – Burns’ Heir (Simpsons S5E18)

Lesson in IT security: IT Security is about risk mitigation, not risk elimination.

Let me say that again: IT security is about mitigation, not elimination. This quote is a solid example of the inverse of that rule, which is what many people believe. I’ve heard numerous end-users tell me that they “don’t bother running any of those anti-virus programs” because they “used to pay for one and they got a virus anyways.”

“Anti-virus” programs, which are more accurately called “anti-malware” programs, are not infallible. The same goes for firewalls, any form of authentication, or any other IT security product in existence. The only absolute in IT security is the absolute possibility of risk. That doesn’t mean the products don’t work; in fact, many are extremely effective at mitigating the risk of various attack vectors. It’s just that there’s no such thing as a “silver-bullet” product capable of eliminating risk.

Quote Three: “Don’t worry, head. The computer will do our thinking now.” – The Computer Wore Menace Shoes (Simpsons S12E6)

Lesson in IT security: Having strong security practices does not mean that you can stop thinking about IT security.

A lot of professionals feel that automation can handle everything, including the security of their IT infrastructure. Unfortunately, that’s only a half-truth. Automation is a glorious tool for the IT professional. Mundane and advanced tasks alike can be automated to execute with more efficiency than ever before. Never again will driver updates be so strenuous a task. Unfortunately, maintaining security is less of a science and more of an art form, and as such the human element is always critical.

Consider Cryptolocker, which has recently been seen distributing itself under the guise of a fax notification email. Short of sandboxing every internet browser across your entire network, there’s not a lot you can automate to stop this threat. If you pay attention to various security forums, though, you may have seen reports from people who recently encountered that variant. With human intervention, you could then set up an email filter for any emails including the word “fax” and inform your staff of the risk and how to avoid infection. When that level of automation is possible you can let the computer do your thinking; until then, you can’t simply assume your systems will be able to handle everything.
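
As a rough illustration of that stopgap, here is a small sketch; it is not any specific mail gateway’s rule syntax, and the term list is a placeholder you would draw from current threat reports, but it shows the idea of quarantining inbound mail whose subject or attachment name mentions “fax” until a human reviews it.

```python
# Quarantine check for inbound mail based on suspicious terms (illustrative only).
import email
from email.message import Message

SUSPICIOUS_TERMS = ("fax",)  # placeholder list, updated from current threat reports

def should_quarantine(raw_message: bytes) -> bool:
    msg: Message = email.message_from_bytes(raw_message)
    subject = (msg.get("Subject") or "").lower()
    if any(term in subject for term in SUSPICIOUS_TERMS):
        return True
    for part in msg.walk():
        filename = (part.get_filename() or "").lower()
        if any(term in filename for term in SUSPICIOUS_TERMS):
            return True
    return False
```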

Quote Four: “They have the Internet on computers, now?” – Das Bus (Simpsons S9E14)

Lesson in IT security: Keeping your intranet internal and your DMZ demilitarized are no longer easy tasks.

Yes Homer, they have the internet on computers now. To be more accurate, they have the internet on everything now. Back in the day, keeping users off of unsecured connections was as easy as telling them that being caught with a personal modem in the office was a termination-worthy offense; however, with the prevalence of cell phones and other portable devices, a far greater risk than the 2400 baud modem of yore lies in every employee’s pocket.

What this means is that endpoint security and security awareness training are more critical than ever before. You can’t always trust your users, but you can teach them to not trust themselves. That may sound like a candidate for “most depressing speech ever given to new employees”, but if they’re aware of the risk each of them poses to the security of your network, they may hesitate before using their smartphone to send out that confidential business information in the future.

Quote Five: “Can’t someone else do it?” – Trash of the Titans (Simpsons S9E22)

Lesson in IT security: This final rule has an easy explanation. No, someone else cannot do it. IT security is everyone’s job.

This episode is one of the most memorable Simpsons episodes, and incidentally it also offers one of the most relevant lessons you can pass on to your users. How does garbage disposal tie in to IT security? Quite easily: just think of IT security as running a sanitation department.

Homer’s sanitation plan failed because of the inefficiency inherent in getting a third party to handle all of the jobs previously handled by the citizens. Why is it okay, then, to have IT security handled by a single department or person? People take their garbage to the curb to decrease the work required of sanitation workers; it’s this collaboration that makes the process effective. It logically follows that such collaboration would equally benefit an IT department. Minimize the work you place on your IT staff: if you bring them your security concerns, such as potential malware infections, rather than leaving it to them to notice and figure out, then the entire process is streamlined. Work smarter and minimize the workload placed on IT’s shoulders, because, while someone else can do it, having someone else do it is extremely inefficient.

If you’re looking for even more ways to improve the efficiency of your IT staff, why not take a look at a system which offers innumerable utilities from a single pane of glass?

A properly implemented Single Sign-On solution can also drastically improve the efficiency of business. For more information on that subject: Click Here.

Author: Harrison Depner

Education and Mitigation: Improving IT Security Through User Education

School IT Security

Unless your network consists of a room full of users connecting to an unsecured consumer-grade router, the most vulnerable part of your network is your users. Technology is good at following rules consistently, while people are not. You can trust a computer not to install viruses on itself; it can be infected, but that’s not how it was designed to function. Technology may not always work the way it’s supposed to, but it’s not as if the technology itself has any control over its actions. People, on the other hand… well, you just can’t trust people not to make bad decisions.

Even the Romans knew it. To err is human: Errare humanum est. -Seneca

Trusting in your users to do everything right is foolhardy; however, it’s quite possible to teach them not to trust themselves! In the field of IT security you should trust no one. Think about how much risk would be mitigated if you could pass that notion on to your users.

Would your average users stop opening the random links people send them featuring “10 cute kitten videos you have to see”? Probably not, but if we change the question a little and ask, “Would your users engage in that sort of risky behavior less often?” then the answer becomes a definitive “yes.”

When it comes to educating your users about IT security, there are a lot of wrong ways to connect the dots. Simplistic training sessions can make your users feel ignorant, gullible, or even unintelligent. From my experience, the best practices tend to be those which are honest, informative, and relevant. Try having a brownbag lunch and discussing IT security issues that have recently received media coverage. People remember large events like when Sony was hacked, so you could work that into a lesson about why it’s dangerous to recycle passwords across websites. Make your lessons relatable and you will improve the amount of knowledge your employees retain. It’s just that simple.

Maybe this doesn’t apply to your business. Perhaps you work at an MSP where the most computer illiterate employee you have is the janitor from Elbonia who has his CIS degree printed on what looks to be a cereal box. Well, even then there’s still plenty to learn.

Work can be hectic and busy. There are always new patches to install and break-fix work to do. After a certain point, it gets really easy to become apathetic to the process. Well, no surprises here, but not embracing lifelong learning is one of the worst possible things you can do. IT security isn’t something you can just learn and be done with; it’s a constantly changing and evolving field! You can memorize your ABCs, but the closest thing to that I have seen in IT is the four cardinal rules of IT security.

Have you heard of the four cardinal rules? Probably not, because I’m sure my instructor was improvising when he taught us. That would explain why they’re pretty much the same as the four cardinal rules of gun safety. Well, here are those four rules, so read them and see if you pick up anything new!

  1. All guns are always loaded.

    Connecting things to a network is a lot like picking up a gun. It could be loaded (with malware), or be poorly manufactured, which adds the risk of it blowing up in your face. You might want to trust the ergonomic keyboards your techs brought from home, but even that can be risky.

    In short: Assume nothing, and check everything.

  2. Always point the muzzle in a safe direction.

    Patches, updates, hardware installations: this applies to everything. If you’re going to change anything on your network, don’t just plow ahead and do it. Aim those changes in a safe direction (like a test server or non-critical system) and try things out there first. If things work well on the test server, then safely implement the changes across all systems. You wouldn’t play Russian roulette with your life on the line, so why would you do it with your network?

    In short: Test everything before it goes live.

  3. Keep your finger off the trigger until you are on target and ready to shoot.

    It’s good to stay on top of the most recent updates, but there’s a fine line between updating appropriately and excessively. Just because you can update to the newest beta version of Java doesn’t mean you should, and just because there’s a newer version of an OS, that doesn’t mean you need it.

    In short: Don’t change anything on the fly and don’t install anything without considering the results.

  4. Know your target, and what lies beyond.

    When changing anything, make sure you are fully aware of what it is, what it does, and what needs it. Consider what happened with the release of Windows Vista. Many businesses updated to Vista because their hardware supported it; unfortunately, a number of devices which relied on XP’s resources no longer functioned as a result. Users were scrambling to figure out why their printers, webcams, and other gadgets no longer worked, and it caused quite a headache for the people who supported those systems.

    In short: Do your research. Nothing is as modular as it seems, and updating something as innocuous as a printer could bring your network to its knees.

Above all else, always remember that you can never know too much. Keep on learning, keep reading those blogs, and keep reading those forums. You’ll never know if something you learned is relevant until you have to do it yourself.

Now, before you go looking for random lessons to train your coworkers on, let me throw one more factor into the mix. If you’re reading this blog at blog.kaseya.com, there’s a good chance that you’re a Kaseya customer. If you are, or you’re interested in becoming one, why not take a look at Kaseya University?

Kaseya University is a state-of-the-art training platform for Kaseya users. It utilizes an innovative blended learning approach to provide both structured and flexible access to technical product training. The Learning Center allows students to build a truly customized learning experience unique to their needs. Kaseya University is kept current with Kaseya product releases, and refreshed multiple times a year. To learn more about Kaseya University: Click here

With that knowledge you can accomplish even more from a single pane of glass.

If you want more information on IT security or just want some topic starters: Click Here

If you want a more direct approach for improving your IT Security: Click Here

Author: Harrison Depner

Kaseya Acquires Scorpion Software for Identity and Access Management

Scorpion Software

Last week, Russian criminals stole 1.2 billion Internet user names and passwords, amassing what could be the largest collection of stolen digital credentials in history, according to CNNMoney. The credentials gathered appear to be from over 420,000 websites, both small and large. Which specific websites were impacted is yet to be disclosed, but it’s likely that some “household names” are on the list and will have to deal with the resulting publicity.

Today, companies need to manage access to a growing number of websites and applications. Unauthorized access to sensitive information can cause financial losses, damage reputations, and expose companies to regulatory penalties for privacy violations. According to Ponemon Institute research, the US per-record cost of a data breach is $201. Multiply the 1.2 billion records stolen by the Russian criminals by that $201 and you get roughly $240 billion, a shockingly high number. A Washington think tank has estimated the likely annual cost of cybercrime and economic espionage to the world economy at more than $445 billion, which amounts to a tax by criminals of almost 1 percent on global incomes.

To reduce these exposures, protecting access with the highest levels of security is crucial for IT organizations. But strong security requires a balance between making access difficult for hackers and keeping it easy for bona fide users to comply with and use. According to Verizon’s Data Breach Investigations Report, “The easiest and least detectable way to gain unauthorized access is to leverage someone’s authorized access,” which means passwords need to be properly managed and protected. Accordingly, IT organizations are faced with several challenges:

  • Given the relentless attempts to acquire security credentials through hacking, phishing and other techniques, preventing unauthorized system access requires more than just password-based access controls.
  • Passwords are easily shared, guessed and stolen. Managing password access is challenging for employees and IT organizations as the number of systems requiring password access grows.
  • Managing passwords and system access requires significant IT time and resources, so a highly efficient and easy to use administration solution is necessary.
  • Solutions chosen must comply with all prevailing industry standards.

Today, Kaseya took an important step to help its customers address these challenges, with its acquisition of Scorpion Software. The Scorpion Software AuthAnvil product set provides an important addition to the Kaseya IT management solution, offering two factor authentication, single sign-on and password management capabilities.

The solution provides IT groups with:

  • An advanced multi-factor authentication solution which provides a level of security which passwords alone cannot deliver.
  • An effective single sign-on solution with easy access to all systems for employees which avoids the need for sharing or writing down of passwords.
  • Powerful and easy-to-use password management capabilities to drive efficiencies in administering password access.
  • Support for industry standards compliance and auditing including PCI, HIPAA, SOX, CJIS and other standards.

These capabilities are aimed directly at the security and efficiency challenges above, and are essential for MSPs and IT organizations to be able to effectively manage secure access to applications and ensure standards compliance.

Scorpion Software is a longtime partner of Kaseya and has already implemented an integration with Kaseya Virtual System Administrator (VSA), making it easy for existing Kaseya customers to add Scorpion Software’s unique security capabilities to their solutions. Kaseya VSA is an integrated IT Systems Management platform that is used across IT disciplines to streamline and automate IT services, and the integration of Kaseya with Scorpion Software’s AuthAnvil technologies creates an IT management and security solution unmatched in the industry.

Scorpion Software’s AuthAnvil is currently in use by over 500 MSPs around the globe, and is the only identity and access solution to provide two factor user authentication integrated with password management and single sign-on. It allows IT organizations and MSPs to quickly and easily enable and manage secure access to all applications, delivering the highest levels of security and efficiency.

With the acquisition of Scorpion Software, Kaseya continues its work to deliver a complete, integrated IT management and security solution for MSPs and mid-sized enterprises around the world. The combined solution will help IT organizations:

  • Command Centrally: See and manage everything from a single integrated dashboard.
  • Manage Remotely: Discover, manage, and secure widely distributed environments.
  • Automate Everything: Deploy software, manage patches, manage passwords, and proactively remediate issues across your entire environment with the push of a button.

I know that many Kaseya customers who are reading this blog are already Scorpion Software customers. For those who are not, I invite you to visit the Scorpion Software website to learn more and see the product for yourself at www.scorpionsoft.com. Also, for more information, don’t hesitate to reach out to your Kaseya sales representative or email AuthAnvilSales@Kaseya.com.

Author: Tom Hayes

 

Optimizing Mid-Market Virtualized Environments for Performance and ROI

Like their larger enterprise counterparts, mid-market organizations have taken extensive advantage of virtualization and server consolidation. Yet despite their increasing investment in virtual server, storage and networking capabilities, they frequently fail to invest in the tools needed to truly optimize their virtualized environments for performance and ROI.

virtualized environment

Many mid-market IT operations groups find that optimizing their infrastructure to get the best returns on their investments, while simultaneously maximizing availability, is a significant challenge. Most have implemented virtualization over the past few years to reduce the number of physical servers they need, along with the associated office space, energy usage and IT staff resources. However, they frequently underutilize the virtual machines (VMs) they create in order to avoid overloading the hosts*.

Tool sophistication and coverage

The problem doesn’t seem to be a lack of tools but rather a lack of tool sophistication and coverage. Each hypervisor, storage and network vendor offers tools for managing and optimizing the capabilities of their own technologies. While these tools provide real-time monitoring, they are not usually able to correlate information across different domains, cannot filter derivative conditions effectively, and provide little information about expected norms and predictable variations. This leaves manpower-strapped IT organizations with the task of manually reviewing and evaluating monitoring results in order to do configuration design, capacity planning, or root-cause analysis of performance issues.

The complexity of today’s hybrid-cloud IT environments and the ever-increasing demands placed on IT make it difficult for small IT teams to dedicate sufficient time to monitoring and managing activities. So, despite the underutilization of server capacity, agreed-to service levels are hard to maintain, and IT, in fact, relies on end users to report poor performance! The net result for many groups is a lower virtualization ROI than anticipated, lower IT service availability and, sometimes, a less than stellar IT reputation.

Advanced application monitoring

One approach to dealing with this issue is to adopt a more advanced service level monitoring solution. By aggregating individual managed elements into collections of applications, VMs, storage, networking devices and rules that represent complete IT services, it becomes possible to take a more holistic approach to performance management and ROI improvement. Such monitoring solutions not only monitor the individual components and their associated parameters, they also correlate data from all of the service components as a whole and are able to undertake trending and baselining to help proactively identify forthcoming issues as well as to eliminate predictable parameter variations as causes of concern.
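
To illustrate the baselining idea in the simplest possible terms, here is a small sketch; it is illustrative only and not how Traverse is implemented, but it shows the principle of learning an hourly “expected norm” for a metric and flagging samples that fall well outside its usual variation.

```python
# Learn a per-hour baseline for a metric and flag large deviations (illustrative only).
from statistics import mean, stdev

def build_baseline(samples):
    """samples: list of (hour_of_day, value). Returns {hour: (mean, stdev)}."""
    by_hour = {}
    for hour, value in samples:
        by_hour.setdefault(hour, []).append(value)
    return {h: (mean(v), stdev(v) if len(v) > 1 else 0.0) for h, v in by_hour.items()}

def is_anomalous(baseline, hour, value, sigmas=3.0):
    """Flag values more than `sigmas` standard deviations from the hourly norm."""
    avg, sd = baseline.get(hour, (None, None))
    if avg is None:
        return False  # no history yet for this hour
    return sd > 0 and abs(value - avg) > sigmas * sd

history = [(9, 42.0), (9, 45.5), (9, 40.8), (9, 44.1)]  # hypothetical CPU % samples
baseline = build_baseline(history)
print(is_anomalous(baseline, 9, 43.0))  # False: within the normal 9 a.m. range
print(is_anomalous(baseline, 9, 95.0))  # True: well outside the 9 a.m. norm
```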

By monitoring applications through virtualized servers or from cloud services while keeping track of network, storage and other infrastructure components, advanced service level monitoring solutions are also far better at preventing those complex performance issues where nothing seems to be broken and no alerts have been sent, yet users are complaining. The wide and deep purview of such solutions also allows a more comprehensive approach to root-cause analysis. Here are five areas where advanced service level monitoring tools can take the hard work out of monitoring virtualized environments and help improve both performance and ROI.

  • Server over utilization and/or underutilization. Time constraints often limit the ability of mid-market IT services groups to monitor virtual and physical server utilization and the associated storage and networking resources. Examining utilization even on a weekly basis can be totally inadequate. What’s needed is a continuous monitoring capability that correlates results between different VMs running on the same server so that CPU capacity-related performance issues can be diagnosed. Application performance can also be affected by networking and storage constraints, which in turn may be caused by applications running on adjacent VMs. Server and performance optimization requires understanding not simply the peak load requirements of individual applications but also workload patterns and system demands created by multiple applications. Reports can be viewed on a weekly basis, but data should be collected continuously and saved for later analysis and review.
  • Server versus infrastructure optimization. Monitoring server compute and storage capacity is very important but performance issues are frequently associated with the volume of network traffic or of data to be processed. Typically there are trends and patterns around these that, if identified proactively, can be used to overcome performance issues before they have impact. Identifying such trends can signal the need for additional network bandwidth, improved internet connectivity, greater or faster storage capacity, more processing power etc. – investments that are far easier to justify when related to their impact on service level agreements.
  • Static versus dynamic workloads. Another challenge is to track business application performance across dynamic server environments. When system applications such as VMware’s vMotion or Storage vMotion are used, VMs can migrate dynamically from one physical server to another without service interruption, for example when DRS or maintenance modes are enabled. In simple environments it may be easy to determine where VMs (and hence applications) have migrated but in more complex environments this becomes problematic. The advantage of vMotion is that when activated it automatically preserves virtual machine network identities and network connections, updating routers to ensure they are aware of new VM locations. The challenge from the perspective of application end-to-end performance is to know which physical server is now hosting the application – particularly as the address hasn’t changed. Advanced monitoring solutions follow these migrations and, by containerizing all the infrastructure elements that make up a particular IT service, can take account of the dynamic changes occurring in hosting, storage and networking components.
  • Cyclical, erratic and variable workloads and traffic patterns. Optimizing server consolidation is relatively straightforward when application workloads are consistent over time. However, many applications place highly variable, cyclical or erratic demands on server, storage and networking components, making it more likely that resources are sub-optimized in favor of simplicity and time. Advanced service level monitoring solutions are able to analyze the patterns of usage and baseline the results to provide a more granular view which can be used to better take advantage of available resources and avoid unnecessary alerts. For example, a payroll application that requires significant resources prior to the end of each pay period might be paired with a finance application that needs to run after orders have been taken at the end of each month. Similarly, it may make sense to pair development-related activities with test activities, assuming that development and testing are done in series, not in parallel. Advanced monitoring can help identify not only the processing capacity requirements and patterns but also those of storage and bandwidth so that all factors can be taken into account when optimizing resource allocation and setting thresholds.

  • Root-cause analytics and meeting/reporting on SLAs. Optimization is an important goal to maximize the virtualization ROI, but what most users care about is IT service availability and performance. As with all things complex, problems will occur. The challenge is to be able to resolve them as quickly as possible. Advanced service level monitoring solutions help because they are able to pinpoint problem areas and then drill down, through dashboard screens, to rapidly identify root causes. Because they are able to look across every element of the infrastructure, they can identify interactions between different components to determine cause in ways that discrete management systems cannot. In addition, the ability to track and trend parameters of the components that make up each IT service provides a proactive mechanism able to predict likely performance issues or SLA violations in advance. This provides IT Ops with reports that can be shared with management and users to justify any changes or additional investments needed.

Advanced service level performance management tools have affordable starting prices and offer significant ROI themselves by increasing the return from virtualization and allowing SLAs to be met and maintained. Add faster mean time to problem resolution and IT resources freed for more productive activities, and their value is very significant.

By helping the IT departments of mid-sized companies optimize their virtualized environments, Kaseya’s advanced monitoring solution, Traverse, supports SLA mandates and frees in-house IT staff to better respond to business requests. It also provides detailed intelligence that IT can use to add strong value in conversations regarding business innovation.

Learn more about how Kaseya technology can help. Read our whitepaper, Solving the Virtualized Infrastructure and Private Cloud Monitoring Challenge.

References:

* Expand Your Virtual Infrastructure With Confidence And Control

Author: Ray Wright
