Chances are that, on an average day, you are not accomplishing as many tasks as you would like… and neither are your colleagues or your employees. What makes that statement mystifying is that today’s workforce seems to be putting in more hours and more effort than ever before, even as organizations adopt more IT devices and applications designed to improve user productivity. In fact, this has been a key driver for organizations to enable workforce mobility: providing flexible access to business IT resources (applications, data, email, and other services) from any device, at any location, at any time, in order to improve overall business performance. But even the most accomplished business professionals must admit there are days when little gets done despite herculean efforts.
When was the last time an employee left your company?
Was it one month ago? Two?
Gone are the days of the lifelong career. Sure, if you work in education there’s the possibility of tenured professors, but for the average MSP there’s no such thing, and as such there is a significant amount of employee turnover. No matter how hard you try to retain your employees, some are going to be taken from you, and some of those employees are bound to be technicians.
It’s always sad when a technician leaves a company, but the IT security risk their departure creates can linger far longer. You can lock their personal accounts after they leave and have them return their keycards, but you can’t remove all knowledge of your systems and your clients’ systems, applications, networks, and the associated usernames and passwords from their minds.
Now consider the ever-increasing risk of a data breach, and the value of your clients’ data.
Your clients expect that, along with whatever other services you provide, you will help protect them from the risk of a breach, yet every time a technician leaves your company a set of keys to unlock your clients’ secured systems is being released into the world. Many businesses would be bankrupted by even a single breach, and your ex-employees have the means of walking casually past their security and into their systems. How do you think your clients would feel if they knew that?
As a business working in IT, the security of all systems, your clients’ and your own, must be at the forefront of your focus. When it comes to passwords, you need a plan in place that accounts for technicians leaving your company. Many MSPs I’ve seen lack such a plan, which runs afoul of the oldest IT truism: always be prepared. To be well prepared, there are three critical features your plan needs in order to work successfully…
1. Auditing
Your system, no matter how it’s set up, absolutely needs some auditing functionality. This allows you to check:
- Who has accessed certain passwords, and when.
- Whether the stored passwords meet any complexity or compliance requirements.
- Whether the stored passwords are accurate and actually match the ones in use.
- Who the contact with authority is, should something go wrong.
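To make those auditing checks concrete, here is a minimal Python sketch of the kind of queries such a system answers. All names, records, and the complexity policy are hypothetical; a real password manager would persist audit events in a tamper-evident store rather than a list in memory.

```python
from datetime import datetime

# Hypothetical in-memory audit trail; a real password manager would
# persist these events in a tamper-evident store.
AUDIT_LOG = [
    {"user": "alice", "password_id": "client-a/firewall",
     "time": datetime(2014, 5, 1, 9, 30)},
    {"user": "bob", "password_id": "client-a/firewall",
     "time": datetime(2014, 5, 2, 14, 5)},
    {"user": "bob", "password_id": "client-b/router",
     "time": datetime(2014, 5, 3, 11, 0)},
]

def who_accessed(password_id):
    """Answer 'who has accessed this password, and when'."""
    return [(e["user"], e["time"])
            for e in AUDIT_LOG if e["password_id"] == password_id]

def meets_complexity(password, min_length=12):
    """Check a stored password against a simple complexity policy:
    minimum length plus lower-case, upper-case, and digit characters."""
    return (len(password) >= min_length
            and any(c.islower() for c in password)
            and any(c.isupper() for c in password)
            and any(c.isdigit() for c in password))
```

When a technician leaves, the first query tells you exactly which credentials they touched, which is the list you must rotate first.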
2. Access Control
No technician should ever need to know every single password at any given time. Access control allows you to restrict access to a need-to-know basis. The most common way of accomplishing this is by enacting a role-based access model, where users in certain roles have access to certain passwords. At a minimum, your system should allow you to:
- Control who can access certain passwords.
- Control what access a user has to passwords (read-only, write-only, hidden, etc.)
- Securely store the passwords in a central location, while providing access to virtually everywhere.
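As a rough illustration of that role-based model, the sketch below maps roles to password groups and access levels. The role names, group names, and levels are invented for illustration, not taken from any particular product.

```python
# Minimal role-based access model. Role names, password groups, and
# access levels are all illustrative, not from any particular product.
ROLE_PERMISSIONS = {
    "helpdesk": {"client-a/workstations": "read-only"},
    "senior-tech": {"client-a/workstations": "read-write",
                    "client-a/servers": "read-only"},
    "admin": {"client-a/workstations": "read-write",
              "client-a/servers": "read-write"},
}

def access_level(role, password_group):
    """Return the role's access to a password group, or None if the
    group is hidden from that role entirely."""
    return ROLE_PERMISSIONS.get(role, {}).get(password_group)
```

A central store evaluates a check like this on every request, so revoking a departed technician’s role becomes a single change rather than a hunt through spreadsheets.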
3. Automation
An Excel spreadsheet just won’t cut it for this requirement. Your system needs to be capable of performing most of these tasks automatically; if you tried to do it all manually, the work required would likely be a full-time job of its own. Your system should be able to automate all of the requirements for auditing and access control, while simultaneously being able to:
- Automatically change and update passwords on a set schedule.
- Inform those in authority when a password that cannot be updated automatically needs changing.
- Automatically enter passwords for users who only need them to log in.
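A minimal sketch of the first of those requirements, scheduled rotation, assuming a simple in-memory vault and an invented 30-day policy:

```python
import secrets
import string
from datetime import datetime, timedelta

ROTATION_INTERVAL = timedelta(days=30)  # illustrative policy

def needs_rotation(last_changed, now):
    """True when a password is older than the rotation interval."""
    return now - last_changed >= ROTATION_INTERVAL

def generate_password(length=16):
    """Produce a random replacement password."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def rotate_due(vault, now):
    """Rotate every due entry in the vault; return the changed ids."""
    changed = []
    for pid, entry in vault.items():
        if needs_rotation(entry["last_changed"], now):
            entry["password"] = generate_password()
            entry["last_changed"] = now
            changed.append(pid)
    return changed
```

In a real solution the rotation would also push the new credential to the managed system; this sketch only shows the scheduling and generation half.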
Now, a lot of these requirements sound hard to fulfill. And they are, should you try to set all of this up yourself. But that’s just the thing: if you were solving the problem of malware, you wouldn’t design your own in-house antivirus. You might rebrand an open source solution, but that rarely ends well.
The same method you use to solve for viruses, email, or any other software requirement can be applied to password management: let someone else build the tools so you don’t have to. You don’t need to invent your own password management system; you just need a password management solution.
While you’re looking for a password management solution, let me throw one more factor into the mix. If you’re reading this blog at blog.kaseya.com, there’s a good chance you’re a Kaseya customer. If you are, or you’re interested in becoming one, make sure that the solution you choose supports a Kaseya integration, so you can accomplish even more from a single pane of glass.
If you want more information on what you need from a password management system: Click Here
If you want to know what I would recommend as a password management system: Click Here
Author: Harrison Depner
My last blog post discussed IT complexity and the new challenges from cloud, mobility, and big data that are key drivers of IT automation. These new challenges make it hard for IT administrators to do their jobs without increasing the level of automation. The post identified the key requirements for an automation solution, from out-of-the-box functionality to policy-based management to community sharing of innovative implementations, noting that not all automation solutions are created equal. To help crystallize the differences and the possibilities, this post provides a set of examples of each type, provided by Ben Lavalley, our automation expert here at Kaseya.
In a strong automation tool, basic automation capabilities come out-of-the-box, ready to deploy, so IT administrators can obtain immediate time savings and efficiency with little configuration effort. Examples include:
- Automate actions based on monitoring of specific workstations. Monitor and create a dashboard view to identify workstations and their status. Then apply policy management to automate routine maintenance. Maintenance may include disk defrag, disk cleanup, browser history cleaning, and other actions.
- Automate patch management with server/workstation policies for Windows patching. Configure automated patch approval and reboot settings for servers and workstations, using policy management for set-and-forget patching.
- Automate third party application updates. Configure application deploy and update policies to keep third party applications up-to-date. IT administrators don’t need to create scripts to update Adobe, browsers, etc.
- Automate auditing. Run reports on machines with low memory, open network file shares, or other risk characteristics, so that corrective action can be taken.
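The auditing example above boils down to simple reports over inventory data. A toy sketch, with records and thresholds invented for illustration (a real RMM agent audit would supply the inventory):

```python
# Toy inventory records; in practice an agent audit supplies these.
MACHINES = [
    {"name": "ws-01", "ram_mb": 2048, "open_shares": []},
    {"name": "ws-02", "ram_mb": 1024, "open_shares": ["public"]},
    {"name": "srv-01", "ram_mb": 8192, "open_shares": []},
]

def low_memory_report(inventory, threshold_mb=2048):
    """Names of machines at or below the RAM threshold."""
    return [m["name"] for m in inventory if m["ram_mb"] <= threshold_mb]

def open_share_report(inventory):
    """Names of machines exposing network file shares."""
    return [m["name"] for m in inventory if m["open_shares"]]
```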
IT administrators can deploy more advanced automation based on common agent and other procedures. Examples include:
- Configure Service Desk for automated remediation of monitoring alerts. Run service or machine restarts to try to resolve a reported issue. In addition, collect diagnostic information from the offending system and add the results of the diagnosis directly into the notes of the ticket, so technicians have the valuable information they need to address the root cause of the problem more quickly.
- Use policy-based automation for select servers. Audit server roles (e.g., Exchange, SQL, domain controller) with dashboard views filtered by location and server type, then create a policy (using policy management) that applies ongoing monitoring and reporting based on system attributes.
- Automate the end-user portal. Customize and automate the end-user portal (via the management agent), to help end users deal with basic issues. Publish bulletins, “how-to” information, etc., and provide procedures for end-users to run on their own machines for self-help.
- Establish policy-based automation for application management. Set a policy for applications that start-up automatically, then detect for non-compliance to policy. Non-compliant applications can also be removed automatically, if desired, to improve workstation performance and remove potential security issues.
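The application-management policy in the last bullet amounts to comparing each machine’s startup entries against an approved list. A minimal sketch, with an invented whitelist:

```python
# Invented whitelist of approved startup applications.
APPROVED_STARTUP = {"antivirus.exe", "backup-agent.exe"}

def non_compliant(startup_apps):
    """Startup entries that violate the policy."""
    return sorted(set(startup_apps) - APPROVED_STARTUP)

def remediate(startup_apps):
    """The startup set after automatic removal of violations."""
    return [a for a in startup_apps if a in APPROVED_STARTUP]
```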
Talented IT administrators like to get creative, and good automation solutions provide the tools to do so. Creative solutions are usually built using some combination of out-of-the-box capabilities along with light scripting. Examples include:
- Stolen laptop recovery. Automate the capture of desktop screenshots and even pinpoint the geographic location of the laptop with wireless network collection (using Google location APIs). It can result in a very surprised thief being apprehended in a coffee shop, for example.
- Automate email, e.g., Exchange server, Quality of Service (QoS) monitoring. Run a regular email test to proactively test that a mail server can send and/or receive mail.
- Clean up the “bloatware”. Establish an approved workstation configuration, detect deviations, and automatically clean-up the “bloatware”. Patrick Magee, from Howard Hughes Corporation, has reduced help desk tickets by 50% with this automation solution.
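The email QoS check mentioned above is essentially a round-trip probe: send a uniquely tagged message, then poll the inbox until it arrives or a timeout expires. In the sketch below the send and fetch functions are injected so that in production they could wrap a real SMTP/POP3 client; everything here is illustrative, not any vendor’s API.

```python
import time
import uuid

def probe_mail_roundtrip(send, fetch_subjects, timeout_s=60, poll_s=5,
                         clock=time.monotonic, sleep=time.sleep):
    """Send a uniquely tagged test message, then poll the inbox until
    it appears or the timeout expires. `send` and `fetch_subjects` are
    injected so they can wrap a real SMTP/POP3 client in production."""
    token = "qos-probe-" + uuid.uuid4().hex
    send("QoS test " + token)
    deadline = clock() + timeout_s
    while clock() < deadline:
        if any(token in subject for subject in fetch_subjects()):
            return True  # mail made the round trip
        sleep(poll_s)
    return False  # raise an alert: send or receive is broken
```

Scheduling this probe every few minutes turns a user-reported outage ("mail is down") into a proactively detected one.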
Regardless of the size of your business, you can improve operational efficiency and productivity through IT automation. Moreover, reducing human involvement wherever possible frees up the IT team to deal with the new challenges posed by cloud, mobility and big data. In harnessing these new technologies, the IT team becomes a partner to the organization, helping to drive business success.
For more information on Kaseya automation capabilities, visit our IT Automation website: http://www.kaseya.com/features/kaseya-platform/it-automation.
“Big Data” has been on most IT folks’ radar screens for some time but new data suggests the time has come for mid-market companies to do some serious thinking about the implications.
It’s over 400 years since Galileo, hearing of the invention of the telescope, rushed to create his own version to sell to the commanders of the Venetian navy. He immediately understood the value of being able to more quickly recognize distant ships coming into harbor or gain advance information about the capabilities of enemy craft at sea.
The desire to gain advantage by acquiring and processing information more quickly has arguably been one of the biggest drivers behind the evolution of “IT” ever since Galileo’s first 10x telescope. Obviously, a significant business or military advantage comes from having better knowledge and insight than your adversaries. Knowing what your customers want, and being able to satisfy them more quickly, also creates competitive advantage. Processing orders, handling inventory, purchasing raw materials, invoicing, payments… all garner greater benefit from being done faster.
Why is this important to today’s mid-market IT organizations? And what’s it got to do with “big data”? The latest research from EMA* suggests that the early adopters of big data technologies are moving their projects into production – over half of the projects studied are either in a full or pilot production phase. Survey respondents are finding that big data programs are able to aid real-time decision making. Big data is enabling these companies to mine information from previously hard to analyze data sets (like ships a long way off at sea) and to use it for better outcomes and, ultimately, competitive advantage.
One example is the healthcare organization that analyzed patient medical records in real time to reduce the risk of prescribing harmful medications to inpatients, based on their histories and current symptoms. Another is the restaurant loyalty and rewards program operator who provides real-time program analysis data to restaurant chain customers so they can replicate successful marketing programs quickly and identify poorly performing restaurant locations at the earliest juncture.
The list of industries and use cases for big data is large and growing and the days of pure experimentation are beginning to wane. The inference is that big data will be the next wave of competitive development that speeds the availability of critical business data, disrupts business models and changes the competitive landscape.
The implications for mid-market IT organizations are immense. A key imperative in resource constrained businesses is to free up time to allow for the necessary big data discussions, explorations and innovations. Big data projects cannot be defined or driven by IT alone. It’ll take extra time to develop the business knowledge and relationships required for success. IT must find ways to reduce the time spent on day-to-day operations in order to deliver on both operational excellence and business innovation expectations. To succeed IT must:
- Look to third parties to help with basic tasks and cloud-based services.
- Reduce the number of different tools and systems used – pick the best and most comprehensive – to reduce the burden of dealing with multiple vendors, upgrades, trainings and support efforts.
- Optimize for business growth not just around the IT budget.
Here are 6 responsibilities that tomorrow’s IT department must make time for:
- Identifying opportunities. In the EMA study over 40% of funding came from finance, sales and marketing. The finance department was a major sponsor in the retail, healthcare and manufacturing segments while IT was the largest sponsor in the Public Services sector. Discussions with other functions will help identify key big data opportunities.
- Obtaining funding. Obtaining funding means developing an implementable strategy and cost effective plan that leverages current infrastructure investments and outside capabilities. Funding for projects that truly have a strong business impact will likely come from senior management as well as other functions.
- Defining and developing applications. Big Data initiatives require complex processing. To derive the most from large volumes of “unstructured” or “incomplete” data requires more complex rules and advanced predictive analytics, possibly even the use of natural language processing. In addition, analytical results will need to be built in to existing processes and workloads in order to meet the requirement for speedier decision making and competitive advantage.
- Manage pilot programs. Although big data approaches are maturing, the challenges remain considerable for those who have yet to start. Early adopters spent more time on data management issues than on analytics or on adjusting existing business processes. Later adopters may be able to learn from that early experience and move more quickly when piloting in unfamiliar areas.
- Design “big data” architecture. Adding new data to a traditional structured database is quite simple in comparison to creating an architecture that enables consistent real-time analysis of data from multiple sources, each potentially with a different structure, format, update frequency etc. Ultimately IT will need to redesign the current IT infrastructure. Regardless of where the resultant systems reside, big data represents a major activity for IT going forward.
- Prepare to take a leadership role. As has been indicated, big data programs are complex. Opportunities might be identified from across the organization, but it’s clear that IT needs to take the leadership role when it comes to strategy, planning, design, development, and implementation.
Just as the telescope had a profound impact on the speed with which information became available when it first appeared, big data is starting to have a similar, if not greater, impact. And while large enterprises may have deeper pockets to leverage its capabilities, it is mid-sized businesses that are at greater risk if they ignore the possibilities.
By helping the IT departments of mid-sized companies meet their SLA mandates, Kaseya’s advanced monitoring solution, Traverse, helps free in-house IT staff to better respond to business requests and provides detailed intelligence that IT can use to add strong value in conversations regarding business innovation.
Learn more about how Kaseya technology can help. Read our whitepaper, Solving the Virtualized Infrastructure and Private Cloud Monitoring Challenge.
Are you a harried, yet innovative IT Administrator? With increased IT complexity, driven by cloud, mobility, and big data, it is no wonder that IT administrators are working harder than ever but still having trouble keeping up. Yet I hear story after story about the creative, innovative approaches that IT admins are taking to address these new challenges. Usually these approaches involve automation. In fact, IT automation needs to be part of every MSP’s and IT organization’s plans for dealing with the increased IT complexity, greater workloads, and flat budgets faced by organizations in every industry, in every part of the world.
But I still encounter resistance to IT automation from those who feel that it is a path to “I don’t have a job.” Many IT administrators have become experts at maintaining systems with software updates, patches, new security releases, etc., and some believe that automating these functions would negatively impact their jobs. In reality, automation frees up good IT administrators to attack new, challenging, high-impact opportunities to support the business.
Others look at IT automation as a replacement for homegrown scripts created to automate certain functions. These homegrown scripts are a source of pride and job security. Unfortunately, they are usually not well documented, need regular maintenance, and cover only a subset of the many functions that could and should be automated. It is hard to implement extensive automation of the many repetitive, manual IT tasks by creating script after script.
Fortunately, there are solutions today that provide out-of-the-box automation for the many routine, repetitive, manual core functions that most IT administrators would gladly stop doing, while at the same time providing the flexibility, interfaces, and tools needed for more creative and innovative IT automation. A simple search will reveal the key vendors; however, not all automation solutions are created equal. When evaluating these automation solutions, there are five key items you should consider:
1. Out-of-the-Box Automation
Tasks such as scheduled backups, software deployments, patches, and security updates are automation capabilities that should be ready to go straight out-of-the-box. Experienced vendors will have distilled these built-in capabilities from years of working with IT service providers and corporate IT departments, covering good-practice areas such as routine maintenance, software deployment, security, and compliance.
2. Policy-Based Management
Ensuring that every user and every system is managed consistently is critical. However, with thousands of systems logging on to and off of multiple, sometimes global, networks, it isn’t feasible for the IT department to manually touch every machine to ensure it complies with all of the organization’s IT policies. With policy-based automation, IT administrators can define, manage, apply, and enforce IT policies across groups of machines without human intervention.
3. Flexibility and Interfaces to be Creative
Once the core, out-of-the-box automation has been implemented, IT administrators can differentiate themselves and their MSP businesses or IT organizations with creative, innovative automation capabilities. Examples I have seen include problem remediation, isolation of viruses, and stolen laptop recovery. In every case, the automation solution provided easy-to-use, flexible tools that allowed the IT administrator to be creative.
4. Proven Automation Capabilities
Look for a solution with proven automation capabilities that have been implemented successfully across a large number of customers. Successful implementation of both core and innovative automation across many customers (ideally thousands) means that IT administrators in your organization will likely be successful as well. But be sure to speak to a few references.
5. An Active Community
A broad customer base with an active community that shares automation use cases and implementations is very helpful. Look at the vendor’s community, see how it functions, and talk to community leaders to understand how it works. IT administrators generally like to have “community” ties, and a strong community can enhance the work environment and speed automation results.
IT automation is a must for any MSP or IT organization trying to keep up with the complexity and challenges posed by cloud, mobility and big data. Choosing a proven solution with the right capabilities and community support can make the move to automation much easier, especially for the harried, but innovative IT administrator.
For more information on Kaseya automation capabilities, visit our IT Automation website: http://www.kaseya.com/features/kaseya-platform/it-automation
So many times we hear someone say that they are proactive, not reactive, in their approach to support. It is a sound approach to systems management and support, but getting there is tricky, right? One hundred percent proactive support is as unrealistic as flying unicorns, but there are certainly tools and tricks for maximizing proactive work and minimizing the load that reactive support activities place on your IT organization.
WEBINAR REPLAY: MSP Pricing Tips – Determining Optimal Margins for Your Service Offerings (originally aired March 2012)
As a sneak peek of the session, Erick’s presentation on determining optimal IT managed service margins covered:
- An introduction to key pricing factors
- How to define – and calculate – labor cost
- How to define – and calculate – overhead cost
- How to use these data to optimize profits
- How to use these data to grow your business
With the latest build, we have released a Lua script that can monitor your Microsoft server cluster. The released version checks the operational status of nodes and shared resources.
Kaseya Network Monitor v4.1 Build 7394 or higher
Microsoft Windows Server 2003/2008 Cluster
Setting up this monitor involves creating an object that represents the cluster. Testing individual nodes is also possible, but if the node you are testing against goes down, the script will simply fail, unable to read any values.
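The Lua script itself ships with Kaseya Network Monitor, but the core check is easy to picture. As a rough Python illustration (not the shipped script), the sketch below parses the default table output of PowerShell’s Get-ClusterNode cmdlet and treats the cluster as operational only when every node is Up; the sample output is invented.

```python
# Invented sample of the default `Get-ClusterNode` table output.
SAMPLE_OUTPUT = """\
Name                           ID    State
----                           --    -----
node1                          1     Up
node2                          2     Down
"""

def parse_cluster_nodes(text):
    """Parse `Get-ClusterNode` table output into {node: state}."""
    states = ("Up", "Down", "Paused", "Joining")
    nodes = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[-1] in states:
            nodes[parts[0]] = parts[-1]
    return nodes

def cluster_operational(nodes):
    """Treat the cluster as operational only when every node is Up."""
    return bool(nodes) and all(state == "Up" for state in nodes.values())
```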
Last month I spent a lot of time meeting with our MSP customers, discussing how our new analytics and automation module fits into their operations. Each meeting had a common recurring theme: while the first phase of their business was customer acquisition, the current phase is focused on reducing operational costs. Deriving higher efficiencies in the NOC using better tools is once again a top priority for senior management.
Many of the MSPs had been using free open source tools, but as they grew in size and operations they realized that there is a cost to “free” which can add up pretty quickly. That realization made them switch to commercial products focused on ease of use and lower TCO: a first step towards higher efficiency in the NOC.
Interoperability between the different systems then becomes an important requirement. Looking at the ITIL cheat sheet, at the very least you need monitoring, service desk, inventory, and billing systems. An open API, and better still, existing connectors between these systems, reduces human interaction and automates as much as possible. Even more useful, and often overlooked, is workflow management: when a new customer is brought on board, where are the devices created, how do the monitoring and billing systems get provisioned, and how are notifications escalated between the monitoring and ticketing systems?
Automation and intelligence within each system is next on the list: how can each of the systems (monitoring, service desk, inventory, billing) provide more efficiency within its own functional area? In help desk systems, the ability to prioritize alarms, auto-escalate, and apply schedules is useful. In monitoring platforms, reducing noise and false alarms means faster time to resolution with fewer resources.
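Noise reduction in a monitoring platform often starts with suppressing duplicate alarms. A minimal sketch, assuming time-ordered alarms and an invented five-minute suppression window:

```python
SUPPRESS_WINDOW_S = 300  # illustrative: notify at most once per 5 minutes

def dedupe_alarms(alarms, window_s=SUPPRESS_WINDOW_S):
    """Keep only the first alarm per (device, metric) inside each
    suppression window. `alarms` is a time-ordered list of
    (timestamp_s, device, metric) tuples."""
    last_notified = {}
    kept = []
    for ts, device, metric in alarms:
        key = (device, metric)
        if key not in last_notified or ts - last_notified[key] >= window_s:
            kept.append((ts, device, metric))
            last_notified[key] = ts
    return kept
```

Even this crude suppression can collapse a flapping device’s alarm storm into a single actionable ticket.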
While reducing opex has always been important to management, its priority seems to go through phases. The current vendor and analyst focus on automation and analytics is a good indication that using smarter tools to reduce opex is at the top of everyone’s priority list again.
With the release of Kaseya Network Monitor version 4.1, we now have two built-in ways of monitoring your VMware infrastructure. We support versions 4.1 and 5.0 of ESX/ESXi from build 7345.
With the Lua script that comes with the install package, you can easily set up monitoring of the operational status of the most common health counters. The counters checked are the following:
- Storage volumes
- Storage Extent
- SATA ports
- Power supply