Remotely Manage and Control Your Ever-Widening IT Environment

Big Bang Theory
According to the “Big Bang” cosmological theory, and the latest calculations from NASA, the universe is expanding at a rate of 46.2 miles (plus or minus 1.3 miles) per second per megaparsec (a megaparsec is roughly 3.26 million light-years). If those numbers are a little too much to contemplate, rest assured that’s really, really fast. And it’s getting faster all the time.

Does this sound a bit like the world of IT management? Maybe IT environments aren’t expanding at 46.2 miles per second per megaparsec, but with cloud services, mobile devices, and increasingly distributed IT environments, it can feel like it. The things that need to be managed are farther away and in motion, which makes the ability to manage remotely crucial. IT operations teams must be able to connect to anything, anywhere, to perform a full range of management tasks.

The list of things that need to be managed remotely continues to grow. Cloud services, mobile devices, and new things (as in the “Internet of Things”) all need to be managed and controlled. To maximize effectiveness, remote management of this comprehensive set should be done from a single command center. Beyond central command, several core functions are needed to successfully manage remotely:

Discovery: Virtually every management vendor offers discovery, but not all discovery is created equal. The discovery capability must be able to reach every device, no matter where it is located – in the office, at home, on the road, anywhere in the world. It must also be an in-depth discovery, providing the device details needed for proper management.

Audit and Inventory: It is important to know all the applications that are running on servers, clients, and mobile devices. Are they at the right version level? And for corporately controlled devices (that is, not BYOD devices), are the applications allowed at all? Enforcing standards helps reduce trouble tickets generated when problems are caused by untested/unauthorized applications. A strong auditing and inventory capability informs the operations team so the correct information can be shared and the right actions taken.

Deploy, Patch and Manage Applications: Software deployment, patching, and application management for all clients are key to ensuring users have access to the applications they need, with a consistent and positive experience. With the significant growth in mobility, the capability to remotely manage mobile devices – whether company-owned or BYOD – to ensure secure access to approved business applications is also important. Mobile devices are arguably more promiscuous in their use of unsecured networks in coffee shops, airports and the like, so it’s even more important to keep up with mobile device patch management so that security fixes are put in place as soon as possible.

Security: Protecting the outer layer of the network, the endpoints, is an important component of a complete security solution. Endpoint protection starts with a strong endpoint security and malware detection and prevention engine. Combine security with patch management to automatically keep servers, workstations and remote computers up to date with the latest important security patches and updates.

Monitor: Remote monitoring of servers, workstations, remote computers, Windows Event Logs, and applications is critical to security, network performance and the overall operations of the organization. Proactive, user-defined monitoring with instant notification of problems or changes — when critical servers go down, when users alter their configuration or when a possible security threat occurs — is key to keeping systems working well and the organization running efficiently.

Remote Control: IT professionals frequently need direct and rapid access to servers, workstations and mobile devices – securely, and without impacting user productivity – in order to quickly remediate issues. Remote control capability must deliver a complete, fast and secure remote access and control solution, even behind firewalls or from machines at home. Because seconds matter, remote control should provide near-instantaneous connect times with excellent reliability, even over high-latency networks.

Automation: As the world of IT becomes more and more complex, it is impossible to achieve required service levels and efficiency when repetitive, manual, error-prone tasks are still part of the process. IT managers need to automate everything that can be automated! From software deployment and patch management to client provisioning and clean-up, automation must be an integral part of a remote management solution.

Choosing management tools with strong remote management capabilities is important to achieving customer satisfaction goals, and doing more with the existing IT operations staff. Learn more about how Kaseya technology can help you remotely manage your increasingly complex IT services. Read our whitepaper, Managing the Complexity of Today’s Hybrid IT Environments.

Author: Tom Hayes

MSP Best Practice: 4 Keys to Automation

Creating Automation

The benefits of automation have been lauded since 1913, when Henry Ford introduced the moving assembly line to manufacture his famous “any color you like as long as it’s black” Model T, launched in 1908. Before assembly lines were introduced, cars were built by skilled teams of engineers, carpenters and upholsterers who built each vehicle individually. Yes, these vehicles were “hand crafted,” but the time needed and the resulting costs were both high. Ford’s assembly line stood this traditional paradigm on its head. Instead of a team of people going to each car, cars now came to a series of specialized workers. Each worker would repeat a set number of tasks over and over again, becoming increasingly proficient, reducing both production time and cost. By implementing and refining the process, Ford was able to cut assembly time by more than half and reduce the price of the Model T from $825 to $575 in just four years.

Fast forward a hundred years (or so) and think about the way your support capabilities work now. Does your MSP operation function like the teams of pre-assembly-line car manufacturers, or have you implemented automated processes? Some service providers, and many in-house IT services groups, still function like the early car manufacturers. The remediation process kicks off when an order (trouble ticket) arrives. Depending on the size (severity) of the order, one or more “engineers” are allocated to solving the problem. Simple issues may be dealt with by individual support staff, but more complex issues – typically those relating to poor system performance or security rather than outright device failures – can require the skills of several people: specialists in VMware, networking, databases, applications and so on. Unfortunately, unlike the hand-crafted car manufacturers who sold to wealthy customers, MSPs can’t charge more for more complex issues. Typically you receive a fixed monthly fee based on the number of devices or seats you support.

So how can you “bring the car to the worker” rather than vice-versa? Automation for sure, but how does it work? What are the key steps you need to take?

  1. Be proactive – the first and most important step is to be proactive. Like Ford with Model T manufacturing, you already know what it takes to keep a customer’s IT infrastructure running. If you didn’t, you wouldn’t be in the MSP business. Use that knowledge to plan out all the proactive actions that need to take place in order to prevent problems from occurring in the first place. A simple example is patch management. Is it automated? If not, as the population of supported devices grows, it’s going to take you longer and longer to complete each update. The days immediately after a patch is released are often the most crucial. If the release eliminates a security vulnerability, the patch announcement can alert hackers to the fact and spur them to attack systems before the patch gets installed. If that happens, there’s much more to do to eliminate the malware and clean up whatever mess it caused. Automating patch management saves time and gets patches installed as quickly as possible.
  2. Standardize – develop a checklist of technology standards that you can apply to every similar device and throughout each customer’s infrastructure: common anti-virus and back-up processes; common lists of recommended standard applications and utilities; recommended amounts of memory; and standard configurations, particularly of network devices. By developing standards you’ll take a lot of guesswork out of troubleshooting. You’ll know if something is incorrectly configured or if a rogue application is running. And by automating the set-up of new users, for example, you can ensure that they at least start out meeting the desired standards. You can even establish an automated process to audit the status of each device and report back when compliance is contravened (a minimal sketch of such an audit follows this list). The benefit to your customers is fewer problems and faster problem resolution. Don’t set your standards so tightly that you can’t meet customers’ needs, but do set their expectations during the sales process so that they know why you have standards and how they help you deliver better services.
  3. Policy management – beyond standards are policies. These are most likely concerned with the governance of IT usage. Policy covers areas such as access security, password refresh, allowable downloads, application usage, who can action updates and so on. Ensuring that users comply with the policies required by your customers and implied by your standards is another way to reduce the number of trouble tickets that get generated. Downloading unauthorized applications, or even unvetted updates to authorized applications, can expose systems to “bloatware”. At best this puts a drain on system resources and can impact productivity, storage capacity and performance. At worst, users may be inadvertently downloading malware, with all of its repercussions. Setting up proactive policy management can prevent unwanted actions from the outset. Use policy management to continuously check compliance.
  4. Continuously review – even when you have completed the prior three steps there is still much more that can be done. Being proactive will have made a significant impact on the number of trouble tickets being generated, but that number will never get to zero – the IT world is just far too complex. However, by reviewing the tickets you can discover further areas where automation may help. Are there particular applications that cause problems? Particular configurations? Particular user habits? By continuously reviewing and adjusting your standards, policy management and automation scripts you will be able to further decrease the workload on your professional staff and more easily “bring the car (problem)” to the right specialist.
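As a minimal illustration of the automated audit mentioned in step 2, the sketch below checks a made-up device inventory against a simple standards checklist. The standard values and inventory data are hypothetical; in practice the inventory would come from your RMM tool’s audit data:

```python
# Minimal sketch of an automated standards audit (hypothetical data).
# Each device dict stands in for inventory pulled from an RMM audit.

STANDARDS = {
    "antivirus": "CorpAV 12.x",
    "backup_agent": "CorpBackup 5.x",
    "min_memory_gb": 8,
}

def audit_device(device: dict) -> list:
    """Return a list of compliance violations for one device."""
    violations = []
    if device.get("antivirus") != STANDARDS["antivirus"]:
        violations.append(f"antivirus is {device.get('antivirus')!r}, "
                          f"expected {STANDARDS['antivirus']!r}")
    if device.get("backup_agent") != STANDARDS["backup_agent"]:
        violations.append("backup agent missing or non-standard")
    if device.get("memory_gb", 0) < STANDARDS["min_memory_gb"]:
        violations.append(f"only {device.get('memory_gb')} GB RAM, "
                          f"standard is {STANDARDS['min_memory_gb']} GB")
    return violations

inventory = [
    {"name": "WS-001", "antivirus": "CorpAV 12.x",
     "backup_agent": "CorpBackup 5.x", "memory_gb": 16},
    {"name": "WS-002", "antivirus": "OtherAV 3.0", "memory_gb": 4},
]

for device in inventory:
    for violation in audit_device(device):
        print(f"{device['name']}: {violation}")
```

Run on a schedule, a report like this is what lets you find contraventions before they become trouble tickets.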

As Henry Ford knew, automation is a powerful tool that will help you reduce the number of trouble tickets generated and, more importantly, the number of staff needed to deal with them. By reducing the volume and narrowing the scope, together with the right management tools, you’ll be able to free up staff time to help drive new business, improve customer satisfaction and ultimately increase your sales. By 1914 – six years after he started – Ford had an estimated 48% of the US automobile market!

What tools are you using to manage your IT services?

Author: Ray Wright

Manage Data, Not Devices


I recently read Verizon’s 2014 Data Breach Investigations Report, which analyzed 63,437 confirmed security incidents, including 1,367 confirmed data breaches, using data contributed by 50 organizations across 95 countries. The public sector had the highest number of security incidents, whereas the finance industry had the highest number of confirmed data breaches (no surprise there!). These security incidents mostly fell into one of the following categories:

  • POS Intrusions
  • Web App Attacks
  • Physical Theft/Loss
  • Miscellaneous Errors
  • Crimeware
  • Card Skimmers
  • Cyber Espionage
  • DoS Attacks

Given your industry and the size of your company, some of these may not matter to you (until they happen to you). But there are three types of security incidents that are universally applicable, especially in this age of exploding adoption of mobile devices. They are Insider Misuse, Physical Theft/Loss and Miscellaneous Errors. It just takes a single lapse in security measures for an organization, whether public, private or government, to end up in a story like this:

Iowa State DHS Data Breach – Two workers used personal email accounts, personal online storage and personal electronic devices for work purposes

Further elaborating on the “Insider Misuse” threat, the Verizon report adds that over 70 percent of IP theft cases occur within a month of an employee announcing their resignation. Such departing employees mostly steal customer data and internal financial information – made all the easier when they are permitted to use personal devices, which walk out the door with them when they leave.

Continue Reading…

Three Key Monitoring Capabilities for VMware Virtualized Servers

The percentage of servers that are virtualized continues to grow, but management visibility continues to be a challenge. In this blog post we look at three key monitoring capabilities – full metal, datastore, and performance – that give you the visibility and control you need to keep your virtualized applications performing well.

Before we start, below is a description of the information models which are important to hypervisor management:

Common Information Model

The Common Information Model, or CIM, is an open standard that defines the management and monitoring of devices, and elements of devices, in a datacenter.

VMware Infrastructure API

The VI API is a proprietary API provided by VMware for management and monitoring of components related to the VMware hypervisor.

Full metal monitoring

Fan status

The fan is essential for proper server function. As rack density goes up, server volume shrinks and fans need to work at higher speeds, which means more wear and tear. A broken fan in a server can quickly cause a major heat build-up that affects the server and possibly neighbouring servers. The good news is that it’s relatively easy to monitor the state of the fans. The CIM_Fan class exposes a property called HealthState that contains information about the health of a fan: OK, degraded, or failed.
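As a rough illustration of how such a check might be scripted against the standard CIM interface, here is a short sketch using the open-source pywbem library. The host name, credentials, and port are placeholders (ESXi exposes CIM over HTTPS on port 5989 when the CIM agent is enabled), and the value map follows the DMTF HealthState definition:

```python
# Sketch: query fan health over CIM with the open-source pywbem library.
import pywbem

# Standard DMTF HealthState values.
HEALTH = {0: "Unknown", 5: "OK", 10: "Degraded/Warning",
          15: "Minor failure", 20: "Major failure",
          25: "Critical failure", 30: "Non-recoverable error"}

conn = pywbem.WBEMConnection(
    "https://esxi-host.example.com:5989",        # placeholder host
    ("monitor_user", "secret"),                  # placeholder credentials
    default_namespace="root/cimv2",
    no_verification=True)                        # lab only; verify certs in production

for fan in conn.EnumerateInstances("CIM_Fan"):
    state = fan.get("HealthState")
    print(f"{fan.get('ElementName')}: {HEALTH.get(state, state)}")
```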

PSU health

Power supply health is also important to monitor. Most enterprise servers can be configured with redundant power supplies; in addition, it’s good to keep a spare on hand. OMC_PowerSupply is a class that exposes the HealthState property for each PSU in your server. Just like fan health, the PSU is reported as OK, degraded, or failed.

Power usage

The VI API can be used to measure average power usage, which gives an indication of the server’s utility cost. More power usage means more heat, which means even more utility cost in the form of heat dissipation. The VI API counter power.power.average produces results that look like this:

[Chart: power.power.average counter results]
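If you want to pull the same counter programmatically, a minimal sketch with the open-source pyVmomi SDK might look like the following. The vCenter/host names and credentials are placeholders, and this is an illustration of the VI API pattern, not Kaseya’s implementation:

```python
# Sketch: read the power.power.average counter with the pyVmomi SDK.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="monitor_user",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())  # lab only
perf = si.content.perfManager

# Map counter names ("group.name.rollup") to counter IDs.
counters = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
            for c in perf.perfCounter}

host = si.content.searchIndex.FindByDnsName(
    dnsName="esxi-host.example.com", vmSearch=False)

metric = vim.PerformanceManager.MetricId(
    counterId=counters["power.power.average"], instance="")
spec = vim.PerformanceManager.QuerySpec(
    entity=host, metricId=[metric], intervalId=20, maxSample=15)

for entity_metric in perf.QueryPerf(querySpec=[spec]):
    for series in entity_metric.value:
        print("watts:", series.value)  # one reading per 20-second interval

Disconnect(si)
```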

RAID controller, storage volume and battery backup

Three key storage elements that you should monitor are the RAID controller, the storage volumes and the battery. The controller and disks seem obvious, but the battery? In many cases a high-performance RAID controller will have a battery to back up the onboard memory in case of a power outage. The memory on the controller is most commonly used for write-back caching, and when the server loses power, the battery ensures that the cache remains consistent until power is restored and its contents can be written to disk.

Datastore monitoring

Utilization, IOPs and latency are metrics that should be monitored and analyzed together. When you have performance problems in a disk subsystem, an “OK” latency reading can tell you to go and look for problems with IOPs instead; high utilization can tell you why you may not be getting the expected IOPs out of the system; and so on.

Utilization

Utilization can be calculated using the capacity and freeSpace properties of the DatastoreSummary object.
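In code, the calculation is a one-liner. Here is a tiny sketch with placeholder numbers standing in for the DatastoreSummary values, which are reported in bytes:

```python
def datastore_utilization(capacity_bytes: int, free_bytes: int) -> float:
    """Percent used, from DatastoreSummary.capacity and .freeSpace."""
    return 100.0 * (capacity_bytes - free_bytes) / capacity_bytes

# e.g. a 2 TiB datastore with 512 GiB free -> 75.0% used
print(f"{datastore_utilization(2 * 1024**4, 512 * 1024**3):.1f}% used")
```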

IOPs

IO operations per second can be monitored using the VI API datastore.datastoreIops.average counter, which provides an average of read and write IO operations.

Latency

Latency can be measured using the datastore.totalWriteLatency.average and datastore.totalReadLatency.average counters. They show average read and write latency for the whole chain, which includes both kernel and device latency.
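Querying these counters follows the same VI API pattern as the power example. The sketch below assumes the `perf`, `counters` and `host` variables set up in that earlier sketch:

```python
# Sketch: the datastore counters named above, reusing the counter-name
# lookup and QuerySpec pattern from the power-usage example.
from pyVmomi import vim

NAMES = ["datastore.datastoreIops.average",        # combined read+write IOPs
         "datastore.totalReadLatency.average",     # milliseconds
         "datastore.totalWriteLatency.average"]    # milliseconds

metrics = [vim.PerformanceManager.MetricId(counterId=counters[n], instance="*")
           for n in NAMES]                         # "*" = every datastore instance
spec = vim.PerformanceManager.QuerySpec(entity=host, metricId=metrics,
                                        intervalId=20, maxSample=15)

for entity_metric in perf.QueryPerf(querySpec=[spec]):
    for series in entity_metric.value:
        print(series.id.counterId, series.id.instance, series.value)
```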

Performance monitoring

CPU

Threads scheduled to run on a CPU can be in one of two states: waiting or ready. Both of these states can tell a story about resource shortage. The lesser evil of the two is the wait state, which indicates that the thread is waiting for an IO operation to complete. This can be as simple as waiting for an answer from a resource external to the host, or waiting on disk time. The more serious state is the so-called “ready” state, which indicates that the thread is ready to run but there is no CPU free to serve it.


Memory ballooning and IOPS

Memory ballooning is a process that can happen when a host experiences a low memory condition and probes the virtual machines for memory to free up. The balloon driver in each VM tries to allocate as much memory as possible within the VM (up to 65% of the available VM memory), and the host will free this memory to add to the host memory pool.

The memory ballooning counter, mem.vmmemctl.average, can show when this happens. So how can memory ballooning make a dent in your IO graph, you may ask? After the host reclaims memory from VMs, those VMs may start paging memory blocks to disk, which is why memory ballooning may precede higher-than-normal IO activity.

Memory swapping

Ballooning may happen even if there is no issue; it’s a strategy for the host to make sure there is free memory for any VM to consume. Host swapping, however, is always a sign of trouble. There are a number of counters that you want to monitor:

mem.swapin.average
mem.swapout.average
mem.swapinRate.average
mem.swapoutRate.average

These counters show, both cumulatively and as rates, how much memory is swapped in and out. Host memory swapping is double trouble: not only does it indicate that you have a low host-memory situation, it is also going to affect IO performance.
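As a final sketch, again assuming the `perf`, `counters` and `host` variables from the earlier pyVmomi examples, a simple alert on the swap-rate counters might look like this:

```python
# Sketch: flag any host swap activity (swap rates are reported in KBps).
from pyVmomi import vim

SWAP_RATES = ["mem.swapinRate.average", "mem.swapoutRate.average"]

metrics = [vim.PerformanceManager.MetricId(counterId=counters[n], instance="")
           for n in SWAP_RATES]
spec = vim.PerformanceManager.QuerySpec(entity=host, metricId=metrics,
                                        intervalId=20, maxSample=15)

for entity_metric in perf.QueryPerf(querySpec=[spec]):
    for series in entity_metric.value:
        # Any non-zero swap rate means the host is actively swapping.
        if any(v > 0 for v in series.value):
            print("ALERT: host is swapping; expect degraded IO performance")
```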

Final words

Monitoring, reporting and notification on all these metrics can be a challenge. The good news for Kaseya customers is that you can implement the monitoring described in this article using the new network monitor module in VSA 7.0, available now.


Author: Robert Walker

Using Marketing and IT Automation with the Cloud to Build Really Cool Marketing Campaigns


Marketing and IT are moving closer and closer in this era of digital marketing. The number of IT systems a marketer has to work with has grown exponentially, which means the job of a marketer involves ever closer collaboration with the IT department. In this blog I’d like to share a bit of my experience as a marketing automation expert for Kaseya and, hopefully, interest more people in joining the marketing automation movement.

Marketing automation leverages IT systems for marketing efforts in much the same way Kaseya does for IT systems management. The idea is to automate processes with the goal of reducing the manual effort required to run marketing campaigns. There are many levels at which this can be achieved, and I am proud to say that here at Kaseya we have made immense strides over the past few years in achieving higher levels of marketing automation.

It seems like ages ago (but actually isn’t) that standard marketing methodology was to send out manually printed batches of letters. Nowadays we have completely automated our marketing efforts. A good example: we at Kaseya are constantly nurturing our customer database in our NetSuite CRM system (yes, there’s the first bit of cloud-based IT) with our marketing automation solution, Marketo (and there’s another one). Some of these marketing activities involve inviting customers to online webinars (another cloud-based IT solution) or promoting the fact that we have a free cloud trial of our products available. An important point here is that we believe the future is in cloud-based solutions, and Kaseya is leading the industry in offering IT management cloud solutions. So we are not just promoting cloud-based IT solutions, we are also implementing them.

One of the interesting things this digital invasion of the marketing department has shown us is that we cannot do without good IT support. Our jobs rely on systems being up and running, whether they are cloud-based offerings that need to be online 24/7 or personal devices such as laptops and tablets that are used to create ever more interesting campaigns. Strong automation also makes our lives much easier, and as a marketing automation expert I always feel there is nothing cooler than a completely automated campaign that starts to live a life of its own and actually improves our service offering to prospects and customers. Marketing even runs into aspects of IT systems management such as reporting: just as IT wants to know what assets are being managed and how they are performing, marketing wants insight into how campaigns are running and performing. And, as with IT departments, we marketers are continually improving our processes and adapting to the changing business environment and the new opportunities it brings.

Overall, the link between automated marketing, IT and the new cloud services can help drive really cool marketing campaigns.

If you have any questions or would like to share your personal experiences either as a marketer or as an IT manager about marketing automation I’d love to read your comments.

MSP Best Practice: Thoughts on Being a Trusted Advisor


Nothing is more important for MSPs than retaining existing customers and having the opportunity to upsell new and more profitable services. The cost of customer acquisition can make it challenging to profit significantly during an initial contract period. Greater profitability typically comes from continuing to deliver services year after year and from attaching additional services to the contract as time goes on. Becoming a trusted advisor to customers, so that you are both highly regarded and have an opportunity to influence their purchase decisions, has always been important to this process. However, how – and how quickly – you become a trusted advisor depends on some key factors.

Challenge your customer’s thinking

According to Matthew Dixon and Brent Adamson, authors of “The Challenger Sale”*, it’s not what you sell that matters, it’s how you sell it! When discussing why B2B customers keep buying or become advocates – in short, how they become loyal customers – the unexpected answer is that their purchase experiences have more impact than the combined effect of a supplier’s brand, products/services, and value-add!

Not that these factors aren’t important – they clearly are vital too – it’s just that beyond their initial purchase needs, what customers most want from suppliers is help with the tough business decisions they have to make. This is especially true when it comes to technology decisions. The best service providers have great experience helping other, similar, companies solve their challenges and are willing to share their knowledge. They sit on the same side of the table as the customer and help evaluate alternatives in a thoughtful and considerate fashion. In short, they operate as trusted advisors.

The key is to start out as you mean to continue. How can you make every customer interaction valuable from the very outset, even before your prospect becomes a customer? Dixon and Adamson suggest the best way is to challenge their thinking with your commercial insight into their business. What might be the potential business outcomes of contracting your managed services? Yes, they will benefit from your expertise in monitoring and maintaining their IT infrastructure, but in addition, how can the unique characteristics of your services and your professional resources enable new business opportunities for them? What might those opportunities be?

Tools matter

Beyond insights gained from working closely with other customers, having the right tools can have a significant impact too. For example, there are monitoring and management tools that can be used to provide visibility into every aspect of a customer’s IT environment. But tools which are focused on a single device or technology area, or are purely technical in nature, have only limited value when it comes to demonstrating support for customers’ business needs. Most customers have a strong interest in minimizing costs and reducing the likelihood and impact of disruption, such as might be caused by a virus or other malware. Being able to discuss security, automation and policy management capabilities, and show how these help reduce costs, is very important during the purchase process.

Customers who absolutely rely on IT service availability to support their business require greater assurance. Tools that cut through the complexity and aggregate the details to provide decision makers the information they care about are of immense value – for example, the ability to aggregate information at the level of an IT or business service and report on the associated service level agreement (SLA). Better still is the ability to proactively determine potential impacts to the SLA, such as growing network traffic or declining storage availability, so that preventative actions can take place. Easy-to-understand dashboards and reports showing these results can then be used in discussions about future technology needs, further supporting your trusted advisor approach.

With the right tools you also have a means to demonstrate, during the sales process, how you will be able to meet a prospect’s service-level needs and their specific infrastructure characteristics. In IT, demonstrating how you will achieve what you promise is as important, if not more so, than what you promise. How will you show that you can deliver 97% service availability, reliably and at low risk to you and to your potential customer? Doing so adds to your trusted advisor stature.

Thought leadership

Unless you have a specific industry focus, most customers don’t expect you to know everything about their business – just how your services and solutions can make a strong difference. They will prefer to do business with market leaders and with those who demonstrate insight into and understanding of the latest information technology trends, such as cloud services and mobility, and the associated management implications. First, it reduces their purchase risk and makes it more likely that they will actually make a decision to purchase. Second, it adds credibility, again enabling them to see you as a trusted advisor, one who demonstrates a clear understanding of market dynamics and can assist them in making the right decisions.

Credibility depends a lot on how you support those insights with your own products and services. Are you telling customers that cloud and mobile are increasingly important but without having services that support these areas? What about security and applications support?

Being a trusted advisor means that you, and your organization, must be focused on your customers’ success. Make every interaction of value, leverage the right tools and deliver advanced services and you will quickly be seen as a trusted advisor and be able to turn every new customer into a lasting one and a strong advocate.

Learn more about how Kaseya technology can help you create advanced managed services. Read our whitepaper, Proactive Service Level Monitoring: A Must Have for Advanced MSPs.

*Reference: Matthew Dixon and Brent Adamson, The Challenger Sale, Portfolio/Penguin, 2011

What tools are you using to manage your IT services?

Author: Ray Wright

IT Best Practices: Minimizing Mean Time to Resolution


Driving IT efficiency is a topic that always makes it to the list of top issues for IT organizations and MSPs alike when budgeting or discussing strategy. In a recent post we talked about automation as a way to help reduce the number of trouble tickets and, in turn, to improve the effective use of your most expensive asset – your professional services staff. This post looks at the other side of the trouble ticket coin – how to minimize the mean time to resolution of problems when they do occur and trouble tickets are submitted.

The key is to reduce the variability in the time spent resolving issues. Easier said than done? Mean Time to Repair/Resolve (MTTR) can be broken down into four main activities:

  • Awareness: identifying that there is an issue or problem
  • Root-cause: understanding the cause of the problem
  • Remediation: fixing the problem
  • Testing: verifying that the problem has been resolved

Of these four components, awareness, remediation, and testing tend to be the smaller activities and also the less variable ones.
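To make that concrete, here is a toy calculation with entirely made-up phase timings for five tickets. The numbers are illustrative only, but they show how the root-cause phase can dominate both the mean and the spread of MTTR:

```python
# Illustrative (made-up) phase timings in hours for five tickets.
from statistics import mean, stdev

phases = {
    "awareness":   [0.2, 0.1, 0.3, 0.2, 0.2],
    "root_cause":  [1.0, 6.5, 0.5, 12.0, 2.5],
    "remediation": [0.5, 1.0, 0.5, 1.5, 0.5],
    "testing":     [0.3, 0.4, 0.3, 0.5, 0.3],
}

for name, hours in phases.items():
    print(f"{name:12s} mean={mean(hours):5.2f}h  stdev={stdev(hours):5.2f}h")

# Per-ticket MTTR is the sum of the four phases.
totals = [sum(t) for t in zip(*phases.values())]
print(f"{'MTTR':12s} mean={mean(totals):5.2f}h  stdev={stdev(totals):5.2f}h")
```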

The time taken to become aware of a problem depends primarily on the sophistication of the monitoring system(s). Comprehensive capabilities that monitor all aspects of the IT infrastructure and group infrastructure elements into services tend to be the most productive. Proactive service level monitoring (SLM) enables IT operations to view systems across traditional silos (e.g. network, server, applications) and to analyze the performance trends of the underlying service components. By developing trend analyses in this way, proactive SLM management can identify future issues before they occur. For example, when application traffic is expanding and bandwidth is becoming constrained or when server storage is reaching its limit. When unpredicted problems do occur, being able to quickly identify their severity, eliminate downstream alarms and determine business impact, are also important factors in helping to contain variability and deploy the correct resources for maximum impact.

Identifying the root cause is usually the biggest source of MTTR variability and the one with the highest cost associated with it. Once again the solution lies both with the tools you use and the processes you put in place. Often management tools are selected by each IT function to help with their specific tasks – the network group will have in-depth network monitoring capabilities, the database group database performance tools, and so on. These tools are generally not well integrated and lack visibility at a service level. Correlation using disparate tools is also manpower-intensive, requiring staff from each function to meet and try to avoid the inevitable “finger-pointing”.

The service-level view is important, not only because it provides visibility into business impact, but also because it represents a level of aggregation from which to start the root cause analysis. Many IT organizations start out using free, open-source tools but soon realize there is a cost to “free” as their infrastructures grow in size and complexity. Tools that look at individual infrastructure aspects can be integrated but, without underlying logic, they have a hard time correlating events and reliably identifying root cause. Poor diagnostics can be as bad as no diagnostics in more complex environments, and investigating unnecessary downstream alarms to make sure they are not separate issues is a significant waste of resources.

Consider a frequently cited cause of MTTR variability – poor application performance. In this case nothing is specifically “broken,” so it’s hard to diagnose with point tools. A unified dashboard that shows both application process metrics and network or packet-level metrics provides a valuable diagnostic view. As a simple example, a response-time monitor could send an alert that the response time of an application is too high. Application performance monitoring data might indicate that a database is responding slowly to queries because its buffers are starved and the number of transactions is abnormally high. Integrating with network NetFlow or packet data then allows immediate drill-down to isolate which client IP address is the source of the high number of queries. This level of integration speeds the root cause analysis and removes the finger-pointing so that the optimum remedial action can be quickly identified.

Once a problem has been identified, the last two pieces of the MTTR equation can be satisfied. The times required for remediation and testing tend to be far less variable and can be shortened by defining clear processes and responsibilities. Automation can also play a key role. For example, a great many issues are caused by misconfiguration. Rolling back configurations to the last good state can be done automatically, quickly eliminating issues even while in-depth root-cause analysis continues. Automation plays a vital role in testing too, by making sure that performance meets requirements and that service levels have returned to normal.

To maximize IT efficiency and effectiveness and help minimize mean time to resolution, IT management systems can no longer work in vertical or horizontal isolation. The interdependence between services, applications, servers, cloud services and network infrastructure mandates the adoption of comprehensive service-level management capabilities for companies with more complex IT service infrastructures. The amount of data generated by these various components is huge, and the rate of generation is so fast that traditional point tools cannot integrate it or keep up with any kind of real-time correlation.

Learn more about how Kaseya technology can help you manage your increasingly complex IT services. Read our whitepaper, Managing the Complexity of Today’s Hybrid IT Environments.

What tools are you using to manage your IT services?

Author: Ray Wright

Using IT Management Tools to Deliver Advanced Managed Services


The managed service marketplace continues to change rapidly, particularly as more and more businesses are adopting cloud-based services of one type or another. The cloud trend is impacting managed service providers in numerous ways:

  • Cloud migration increases the potential for consulting projects to manage the transition but reduces the number of in-house servers/devices that need to be monitored/managed, impacting retainer-based revenues.
  • Service decisions are more heavily influenced by line-of-business concerns as customers shift spending from capital to operating expenses.
  • Service providers need to be able to monitor and manage application-related service level agreements, not just infrastructure components.
  • Managing cloud-based applications is less about internal cloud resources and their configuration (which are typically controlled by the cloud provider) and more about access, resource utilization and application performance.

To address these challenges and be able to compete in a marketplace where traditional device-based monitoring services are becoming commoditized, MSPs must create and sell more advanced services to meet the changing needs of their customer base. The benefits of delivering higher value services include greater marketplace differentiation, bigger and more profitable deals, and greater customer satisfaction.

Advanced services require the ability to deploy a proactive service level monitoring system that can monitor all aspects of a customer’s hybrid cloud environment including the core internal network infrastructure, virtualized servers, storage and both internal and cloud-based applications and resources. Predictive monitoring services help ensure uptime and increased service levels via proactive and reliable notification of potential issues. Advanced remedial actions should include not only support for rapid response when the circumstances warrant but also for regular customer reviews to discuss longer term configuration changes, additional resource requirements (e.g. storage), policy changes and so on, based on predictive SLA compliance trend reports.

Beyond advanced monitoring, professional skill sets are also very important, particularly when it comes to managing new cloud-based application services. Scaling to support larger customers requires tools that can help simplify the complexity of potential issues and speed mean time to resolution. If every complex issue requires the skills of several experts – server, database, network, application and so on – it will be hard to scale your organization as your business grows. Having tools that can quickly identify root causes across network and application layers is vital.

Cloud customers are increasingly relying on their managed service providers to “fix” problems with their cloud services, whether they be performance issues, printing issues, integration issues or whatever. Getting a rapid response from a cloud service provider who has “thousands of other customers who are working just fine” is hard, especially as they know problems are as likely caused by customer infrastructure issues as by theirs. Learning tools help MSPs capture and share experiences and turn them into repeatable processes as they address each new customer issue.

Automation is another must-have for advanced service providers. Again, scalability depends on being able to do as much as possible with as few resources as possible. For infrastructure management, automation can help with service and application monitoring as well as network configuration management. Monitoring with application-aware solutions is an attractive proposition for specific applications; for the rest, it helps to be able to establish performance analytics against key performance indicators and diagnostic monitoring of end-user transaction experiences. For network management, being able to quickly compare network configurations and revert to earlier working versions is one of the fastest ways to improve mean time to repair. Automating patch management and policy management for desktops and servers also results in significant manpower savings.

Finally, tools which help manage cloud service usage are also invaluable as customers adopt more and more cloud services. Just as on-premises license management is an issue for larger organizations, so is cloud service management, particularly for services such as Office 365. Access to email accounts and collaborative applications such as SharePoint is not only a security issue, it’s also a cost and, potentially, a performance issue.

Developing advanced services requires a combination of the right skills, resources, processes and technology; in effect, a higher level of organizational maturity. Larger customers will tend to have developed greater levels of process and IT maturity themselves, in order to be able to manage their own growing IT environment. When they turn to a managed service provider for help they will be expecting at least an equivalent level of service provider maturity.

Having the right tools doesn’t guarantee success in the competitive MSP marketplace but can help MSPs create and differentiate more advanced services, demonstrate the ability to support customer-required SLAs, and scale to support larger and more profitable customers.

Learn more about how Kaseya technology can help you create advanced managed services. Read our whitepaper, Proactive Service Level Monitoring: A Must Have for Advanced MSPs.

What tools are you using to create advanced managed services?

Author: Ray Wright

Deliver Higher Service Quality with an IT Management Command Center


NASA’s Mission Control Center (MCC) manages very sophisticated technology, extremely remote connections, extensive automation, and intricate analytics – in other words, the most complex world imaginable. A staff of flight controllers at the Johnson Space Center manages all aspects of aerospace vehicle flights. Controllers and other support personnel monitor all missions using telemetry, and send commands to the vehicle using ground stations. Managed systems include attitude control and dynamics, power, propulsion, thermal, orbital operations and many other subsystems. They do an amazing job managing the complexity with their command and control center. Of course, we all know how it paid off with the rescue of Apollo 13, and every astronaut surely appreciates the high service quality the MCC provides.

The description of what they do sounds a bit like the world of IT management. While maybe not quite operating at the extreme levels of NASA’s command center, today’s IT managers are working to deliver ever higher service quality amid an increasingly complex world. Managing cloud, mobility, and big data combined with the already complex world of virtualization, shared infrastructure and existing applications, makes meeting service level expectations challenging to say the least. And adding people is usually not an option; these challenges need to be overcome with existing IT staff.

As is true at NASA, a centralized IT command center, properly equipped, can be a game changer. Central command doesn’t mean that every piece of equipment and every IT person must be in one location, but it does mean that every IT manager has the tools, visibility and information they need to maximize service quality and achieve the highest level of efficiency. To achieve this, here are a few key concepts that should be part of everyone’s central command approach:

Complete coverage: The IT command center needs to cover the new areas of cloud (including IaaS, PaaS, SaaS), mobility (including BYOD), and big data, while also managing the legacy infrastructure (compute, network, storage, and clients) and applications. Business services are now being delivered with a combination of these elements, and IT managers must see and manage it all.

True integration: IT managers must be able to elegantly move between the above coverage areas and service life-cycle functions, including discovery, provisioning, operations, security, automation, and optimization. A command center dashboard with easy access, combined with true SSO (Single Sign-On) and data level integration, allows IT managers to quickly resolve issues and make informed decisions.

Correlation and root cause: The ability to make sense of a large volume of alerts and management information is mandatory. IT managers need to know about any service degradation – together with its root cause – in real time, before a service outage occurs. Mission-critical business services are most often based on IT services, so service uptime needs to meet the needs of the business.

Automation: As the world of IT becomes more and more complex, it is impossible to achieve required service levels and efficiency when repetitive, manual, error-prone tasks are still part of the process. IT managers need to automate everything that can be automated! From software deployment and patch management to client provisioning and clean-up, automation is the key to higher service quality and substantial improvement in IT efficiency.

In addition to these requirements for central command, cloud-based IT management is now a very real option. With the growing complexity of cloud, mobility, and big data, along with the ever-increasing pace of change, building one’s own command center is becoming more challenging. IT management in the cloud may be the right choice, especially for MSPs and mid-sized enterprises that do not have the resources to continually maintain and update their own IT management environment. Letting the IT management cloud provider keep up with the complexity and changes, while the IT operations team focuses on delivering high service quality, can drive down TCO, improve efficiency, and increase service levels.

Author: Tom Hayes

Building the World’s Fastest Remote Desktop Management – Part 3


In previous installments of this series, we went over some key technologies used for the new Kaseya Remote Control solution, and some of the features that make it so fast.

But possibly the most important part of getting a fast and reliable Remote Control session is the network connectivity used under the hood. In this post, we cover the types of connectivity used for Kaseya Remote Control, the advantages of each, and how we combine them for additional benefit.

P2P Connectivity

Peer-to-peer connectivity is the preferred method of networking between the viewer application and the agent. It generally offers high throughput and low latency – and because the viewer and agent are connected directly, it places no additional load on the VSA server.

Kaseya Remote Control uses an industry standard protocol called ICE (Interactive Connectivity Establishment) to establish P2P connectivity. ICE is designed to test a wide variety of connection options to find the best one available for the current network environment. This includes TCP and UDP, IPv4 and IPv6, and special network interfaces such as VPN and Teredo.

In addition, ICE takes care of firewall traversal and NAT hole punching. To achieve this, it makes use of the fact that most firewalls and NATs allow reverse connectivity on ports that have been used for outbound connections. This ensures no additional firewall configuration is required to support the new Remote Control solution.

ICE will select the best available P2P connection based on the type of connectivity, how long each connection takes to establish, and a variety of other factors. In practice, this means you will usually get TCP connectivity on local networks, UDP connectivity when crossing network boundaries, and VPN connectivity when no other options are available.
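Kaseya’s implementation isn’t public, but for a feel of how ICE candidate gathering works in practice, here is a minimal sketch using the open-source aioice Python library. The STUN server is an arbitrary public one, and a real deployment would exchange the gathered candidates with the peer over a signaling channel before connecting:

```python
# Rough illustration of ICE candidate gathering with the open-source
# aioice library (not Kaseya's implementation). The STUN server lets the
# agent discover its public ("server reflexive") address for NAT traversal.
import asyncio
import aioice

async def main():
    conn = aioice.Connection(
        ice_controlling=True,
        stun_server=("stun.l.google.com", 19302))
    await conn.gather_candidates()
    # Host, server-reflexive and relay candidates, highest priority first;
    # ICE probes pairs of local/remote candidates and picks the best.
    for cand in sorted(conn.local_candidates,
                       key=lambda c: c.priority, reverse=True):
        print(cand.type, cand.host, cand.port, cand.priority)
    await conn.close()

asyncio.run(main())
```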

However, testing a wide variety of connectivity options can take several seconds – and in some network environments, it may not be possible to get a P2P connection on any network interface. This brings us to…

Relayed Connectivity

As an alternative to P2P connectivity, Kaseya Remote Control also uses connections relayed through the VSA. Relayed connections are quick to establish and unlikely to be affected by firewalls or NAT devices. They also tend to be more stable over long periods of time, especially relative to P2P connections over UDP.

In practical terms, a relayed connection is made up of outbound TCP connections from the viewer and agent to the VSA, where they are linked up for bidirectional traffic forwarding.

To minimize the network impact, relayed connections from the agent use the same port on the VSA as the agent uses for check-ins. This means that any time an agent can check in, it will also be able to establish a relay connection. Conversely, on the viewer side, relayed connections use the same port on the VSA as the browser uses to view the VSA website: if one works, so will the other.

Combining P2P & Relayed Connectivity

It’s clear that P2P and relayed connectivity both have their distinct advantages, so it would be a shame to settle for just one or the other. To get the best of both worlds, the new Kaseya Remote Control uses both types of connectivity in parallel. In particular:

  • When a new Remote Control session starts, we immediately attempt to establish both types of connectivity.
  • As soon as we get a connection of any type, the session starts. Typically, relayed connectivity will be established first, so we’ll start with that. This results in very quick connection times.
  • With the session now underway, we continue to look for better connectivity options. In most cases, a P2P connection will become available within a few seconds.
  • When a P2P connection is established, the session will immediately switch over from relayed to P2P connectivity. This is totally seamless to the user, and causes no interruption to video streaming or mouse and keyboard events.
  • Even if a P2P connection is established, the relayed connection is maintained for the duration of the session. So if P2P connectivity drops off for any reason, Kaseya Remote Control will seamlessly switch back to the relayed connection, while attempting to establish a new P2P connection in the background.
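Conceptually, this parallel-connect-then-upgrade behavior looks something like the asyncio sketch below. The connect_relay() and connect_p2p() coroutines are hypothetical stand-ins for the real transports, not Kaseya code:

```python
# Conceptual sketch: start relay and P2P attempts in parallel, begin the
# session on whichever connects first, then upgrade when P2P arrives.
import asyncio

async def start_session(connect_relay, connect_p2p):
    relay_task = asyncio.create_task(connect_relay())
    p2p_task = asyncio.create_task(connect_p2p())

    # Start the session on the first connection to complete.
    done, _ = await asyncio.wait({relay_task, p2p_task},
                                 return_when=asyncio.FIRST_COMPLETED)
    print(f"session started over {done.pop().result()}")

    # Keep looking for the better (P2P) path in the background.
    if not p2p_task.done():
        p2p = await p2p_task
        print(f"seamlessly switching to {p2p}; relay kept as fallback")

async def demo():
    async def connect_relay():
        await asyncio.sleep(0.1)   # relays tend to connect first...
        return "relay"
    async def connect_p2p():
        await asyncio.sleep(2.0)   # ...P2P follows a few seconds later
        return "p2p"
    await start_session(connect_relay, connect_p2p)

asyncio.run(demo())
```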

The upshot of all this is that you can have fast connection times, high throughput, low latency, and a robust Remote Control connection, all at the same time: no compromises required!

Let Us Know What You Think

The new Desktop Remote Control will be available with VSA 7.0 on May 31. We’re looking forward to getting it into customer hands, and receiving feedback. To learn more about Kaseya and our plans please take a look at our roadmap to see what we have in store for future releases of our products.
