Archive - MSP RSS Feed

The Significant Value of MSP Advanced Monitoring Services

Kaseya Monitor

There’s no doubt that server virtualization has had a tremendous impact on the IT operations of many small and mid-sized businesses (SMBs). For example, the benefits* have included:

  • Reduced administrative costs
  • Improved data resiliency
  • Better application availability
  • Greater business agility, e.g. faster time to market
  • Increased disaster recovery readiness, and even
  • Higher profitability and business growth

However, a recent survey report from VMware and Forrester** suggests that SMBs may not be achieving the ROI they originally expected from virtualization. It also points out that while the majority of SMBs expect their virtualized environments to grow, they are unable to optimize their server installations and are experiencing difficulties in meeting agreed-to IT service levels.

In particular, many SMBs are challenged to optimize the use of their existing servers. A major problem is lack of skilled resources. Partly this is due to the tight budget constraints that prevail in small and mid-sized companies. Partly it’s due to the difficulty of finding and hiring personnel with the right IT skills. The result is that there is a significant opportunity for MSPs to step into the breach and help.

The Forrester report indicates that the average SMB operates a hybrid-cloud environment. About half of their workloads are virtualized, and Forrester expects further virtualization to occur, including the virtualization of strategic applications. Other research suggests that a majority of SMBs are now using public cloud services as well as private cloud services, including significant uptake of Software-as-a-Service (SaaS) and Infrastructure-as-a-Service (IaaS) offerings. Coupled with these changes, operating budgets have been moving from IT to line-of-business managers over time.

Together these factors amount to a considerable set of challenges, particularly for IT in mid-sized organizations. For example:

  • In virtualized environments applications share the processing, memory, storage and bandwidth resources made available by the host server. When one application begins to hog any of these resources, performance can suffer for the other applications. To overcome this, virtualized server loads need to be rebalanced frequently, e.g. monthly. As installations grow this can be time-consuming and impose unbudgeted costs on IT departments with constrained resources.
  • To provide for IT service continuity during maintenance, critical application performance, and rapid disaster recovery, many virtualized environments support the dynamic switching of applications between servers. The benefits are significant but there is also a substantial impact on visibility. In the past, when each application ran on its own server, troubleshooting was comparatively easy. With dynamic switching, determining where an application was running at the exact instant of a fault, so that the root cause can be established, can be difficult.
  • Managing the performance of public cloud services is also challenging. While IaaS services, such as Amazon EC2, offer management APIs, most SaaS offerings do not provide management capabilities. The best that customers can expect from many of these services is availability guarantees. However, many SaaS applications run in the same kind of virtualized environments as their on-premise counterparts, which means they can be subject to the same kind of co-resident application interference. Yes, they are available, but performance can definitely degrade during peak usage periods.
  • One of the expectations for virtualization was that it would free IT resources to help business counterparts make better-informed technology decisions. However, judging by the results so far, this has been hard for many SMBs to achieve. IT resources have been reduced during the economic downturn, and there’s an expectation that virtualization and self-service private cloud capabilities should significantly improve IT productivity. Lacking resources, IT is now often placed in a position where it’s easier to decline a request than to support it. The result is that line-of-business managers may view IT as the department of “no” versus the department of “know”.
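To make the first of these challenges concrete, here is a minimal sketch of “noisy neighbor” detection on a shared virtualized host. The VM names, metrics and threshold are all illustrative; real numbers would come from the hypervisor’s monitoring API.

```python
# Sketch: spotting a "noisy neighbor" VM on a shared host.
# VM metrics are illustrative; real numbers would come from the hypervisor API.

vms = {
    "vm-app1": {"cpu_pct": 22, "mem_pct": 30},
    "vm-app2": {"cpu_pct": 78, "mem_pct": 55},  # hogging the host CPU
    "vm-app3": {"cpu_pct": 12, "mem_pct": 18},
}

CPU_THRESHOLD = 60  # flag any VM using more than 60% of host CPU

# VMs above the threshold are candidates to migrate during rebalancing
hogs = [name for name, m in vms.items() if m["cpu_pct"] > CPU_THRESHOLD]
print("candidates to migrate to another host:", hogs)
```

A real rebalancing pass would then weigh memory, storage and bandwidth the same way before choosing a target host.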

MSPs who offer advanced monitoring services and can take on the risk of providing availability (uptime)-based SLAs are in a great position to help. Firstly, they have the skilled resources to quickly support the virtualization growth plans of SMBs and to help them optimize their server farm installations. Secondly, they have tools which enable them to track, monitor and manage critical application service levels across the entire infrastructure, including being able to keep track of applications as they migrate dynamically between different virtual machines and different servers. Thirdly, they can provide detailed reporting and analyses to aid discussions about the infrastructure investments needed to maintain SLAs and to inform business/IT decision making.

Tools such as Kaseya Traverse support proactive service-level monitoring, enabling MSPs (and enterprise customers) to get advance warning of impending issues (such as memory/storage/bandwidth constraints) so that they can remediate potential problems before they impact service levels. In addition, by tracking business services (such as supply chain applications) at the highest level, while still being able to drill down to the appropriate server or virtual machine, Traverse allows MSPs to quickly and accurately identify root causes even in the most complex of environments. Add to that support for public cloud APIs, predictive analytics and a powerful reporting capability, and Traverse-equipped MSPs are primed to provide valuable support for today’s mid-sized companies and their hybrid-cloud environments.

By helping the IT departments of mid-sized companies meet their SLA mandates, MSPs can help free in-house IT staff to better respond to business requests, can bolster the reputation of IT within their own organizations, and can help provide the detailed intelligence needed for IT to add strong value in conversations regarding business innovation.

Learn more about how Kaseya technology can help you create advanced managed services.
Read our whitepaper, Proactive Service Level Monitoring: A Must Have for Advanced MSPs.

What tools are you using to manage your IT services?

Author: Ray Wright

References:

* The Benefits of Virtualization for Small and Medium Businesses

** Expand Your Virtual Infrastructure With Confidence And Control

MSP Best Practice: 4 Keys to Automation

Creating Automation

The benefits of automation were lauded as far back as 1913, when Henry Ford introduced the moving assembly line to manufacture his famous “any color you like as long as it’s black” Model T. Before assembly lines were introduced, cars were built by skilled teams of engineers, carpenters and upholsterers who worked to build each vehicle individually. Yes, these vehicles were “hand crafted”, but the time needed and the resultant costs were both high. Ford’s assembly line stood this traditional paradigm on its head. Instead of a team of people going to each car, cars now came to a series of specialized workers. Each worker would repeat a set number of tasks over and over again, becoming increasingly proficient, reducing both production time and cost. By implementing and refining the process, Ford was able to cut assembly time by more than half and reduce the price of the Model T from $825 to $575 in just four years.

Fast forward a hundred years (or so) and think about the way your support capabilities work now. Does your MSP operation function like the teams of pre-assembly-line car manufacturers or have you implemented automated processes? Some service providers and many in-house IT services groups still function like the early car manufacturers. The remediation process kicks off when an order (trouble ticket) arrives. Depending on the size (severity) of the order, one or more “engineers” are allocated to solving the problem. Simple issues may be dealt with by individual support staff but more complex issues – typically those relating to poor system performance or security rather than device failures – can require the skills of several people: specialists in VMware, networking, databases, applications etc. Unfortunately, unlike the hand-crafted car manufacturers who sold to wealthy customers, MSPs can’t charge more for more complex issues. Typically you receive a fixed monthly fee based on the number of devices or seats you support.

So how can you “bring the car to the worker” rather than vice-versa? Automation for sure, but how does it work? What are the key steps you need to take?

  1. Be proactive – the first and most important step is to be proactive. Like Ford with Model T manufacturing, you already know what it takes to keep a customer’s IT infrastructure running. If you didn’t, you wouldn’t be in the MSP business. Use that knowledge to plan out all the proactive actions that need to take place in order to prevent problems from occurring in the first place. A simple example is patch management. Is it automated? If not, as the population of supported devices grows it’s going to take you longer and longer to complete each update. The days immediately after a patch is released are often the most crucial. If the release eliminates a security vulnerability, the patch announcement can alert hackers to the fact and spur them to attack systems before the patch gets installed. If that happens, there’s much more to do to eliminate the malware and clean up whatever mess it caused. Automating patch management saves time and gets patches installed as quickly as possible.
  2. Standardize – develop a checklist of technology standards that you can apply to every similar device and throughout each customer’s infrastructure: standards such as common anti-virus and back-up processes; common lists of recommended standard applications and utilities; recommended amounts of memory; and standard configurations, particularly of network devices. By developing standards you’ll take a lot of guesswork out of troubleshooting. You’ll know if something is incorrectly configured or if a rogue application is running. And by automating the set-up of new users, for example, you can ensure that they at least start out meeting the desired standards. You can even establish an automated process to audit the status of each device and report back when compliance is contravened. The benefit to your customers is fewer problems and faster time to problem resolution. Don’t set your standards so tightly that you can’t meet customers’ needs, but do set their expectations during the sales process so that they know why you have standards and how they help you deliver better services.
  3. Policy management – beyond standards are policies. These are most likely concerned with the governance of IT usage. Policy covers areas such as access security, password refresh, allowable downloads, application usage, who can action updates etc. Ensuring that users comply with the policies required by your customers and implied by your standards is another way to reduce the number of trouble tickets that get generated. Downloading unauthorized applications or even unvetted updates to authorized applications can expose systems to “bloatware”. At best this puts a drain on system resources and can impact productivity, storage capacity and performance. At worst, users may be inadvertently downloading malware, with all of its repercussions. Setting up proactive policy management can prevent unwanted actions from the outset. Use policy management to continuously check compliance.
  4. Continuously review – even when you have completed the prior three steps there is still much more that can be done. Being proactive will have made a significant impact on the number of trouble tickets being generated. But they will never get to zero – the IT world is just far too complex. However, by reviewing the tickets you can discover further areas where automation may help. Are there particular applications that cause problems, particular configurations, particular user habits etc.? By continuously reviewing and adjusting your standards, policy management and automation scripts you will be able to further decrease the workload on your professional staff and more easily be able to “bring the car (problem)” to the right specialist.
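The standardization step above lends itself to simple automation. As a rough sketch, an automated compliance audit might compare each device against a standards checklist. The device records, standard names and thresholds here are hypothetical; in practice they would come from your RMM tool’s inventory.

```python
# Sketch: automated standards-compliance audit (step 2 above).
# The device records and required standards are hypothetical --
# in practice they would come from your RMM tool's inventory API.

REQUIRED_STANDARDS = {
    "antivirus": "CorpAV 12",      # common anti-virus build
    "backup_agent": "NightlyBak",  # common backup process
    "min_memory_gb": 8,            # recommended minimum memory
}

def audit_device(device: dict) -> list:
    """Return a list of compliance violations for one device."""
    violations = []
    if device.get("antivirus") != REQUIRED_STANDARDS["antivirus"]:
        violations.append("non-standard antivirus")
    if device.get("backup_agent") != REQUIRED_STANDARDS["backup_agent"]:
        violations.append("non-standard backup agent")
    if device.get("memory_gb", 0) < REQUIRED_STANDARDS["min_memory_gb"]:
        violations.append("insufficient memory")
    return violations

inventory = [
    {"name": "ws-001", "antivirus": "CorpAV 12", "backup_agent": "NightlyBak", "memory_gb": 16},
    {"name": "ws-002", "antivirus": "OtherAV",   "backup_agent": "NightlyBak", "memory_gb": 4},
]

# Report back only on devices where compliance is contravened
for device in inventory:
    issues = audit_device(device)
    if issues:
        print(f"{device['name']}: {', '.join(issues)}")
```

Run on a schedule, a script like this turns the standards checklist into the continuous compliance report described above.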

As Henry Ford knew, automation is a powerful tool that will help you to reduce the number of trouble tickets generated and, more importantly, the number of staff needed to deal with them. By reducing the volume and narrowing the scope, together with the right management tools, you’ll be able to free up staff time to help drive new business, improve customer satisfaction and ultimately increase your sales. By 1914 – six years after he started – Henry had an estimated 48% of the US automobile market!

What tools are you using to manage your IT services?

Author: Ray Wright

MSP Best Practice: Thoughts on Being a Trusted Advisor


Nothing is more important for MSPs than retaining existing customers and having the opportunity to upsell new and more profitable services. The cost of customer acquisition can make it challenging to profit significantly during an initial contract period. Greater profitability typically comes from continuing to deliver services year after year and from attaching additional services to the contract as time goes on. Becoming a trusted advisor to customers, so that you are both highly regarded and have an opportunity to influence their purchase decisions, has always been important to this process. However, how, and how quickly, you become a trusted advisor depends on some key factors.

Challenge your customer’s thinking

According to Matthew Dixon and Brent Adamson, authors of “The Challenger Sale*”, it’s not what you sell that matters, it’s how you sell it! When discussing why B2B customers keep buying or become advocates – in short, how they become loyal customers – the unexpected answer is that their purchase experiences have more impact than the combined effect of a supplier’s brand, products/services, and value-add!

Not that these factors aren’t important – they clearly are vital too – it’s just that beyond their initial purchase needs, what customers most want from suppliers is help with the tough business decisions they have to make. This is especially true when it comes to technology decisions. The best service providers have great experience helping other, similar, companies solve their challenges and are willing to share their knowledge. They sit on the same side of the table as the customer and help evaluate alternatives in a thoughtful and considerate fashion. In short, they operate as trusted advisors.

The key is to start out as you mean to continue. How can you make every customer interaction valuable from the very outset, even before your prospect becomes a customer? Dixon and Adamson suggest the best way is to challenge their thinking with your commercial insight into their business. What might be the potential business outcomes of contracting your managed services? Yes, they will benefit from your expertise in monitoring and maintaining their IT infrastructure, but in addition, how can the unique characteristics of your services and your professional resources enable new business opportunities for them? What might those opportunities be?

Tools matter

Beyond insights gained from working closely with other customers, having the right tools can have a significant impact too. For example, there are monitoring and management tools that can be used to provide visibility into every aspect of a customer’s IT environment. But tools which are focused on a single device or technology area or are purely technical in nature, have only limited value when it comes to demonstrating support for customers’ business needs. Most customers have a strong interest in minimizing costs and reducing the likelihood and impact of disruption, such as might be caused by a virus or other malware. Being able to discuss security, automation and policy management capabilities and show how these help reduce costs is very important during the purchase process.

Customers who absolutely rely on IT service availability to support their business require greater assurance. Tools that cut through the complexity and aggregate the details to provide decision makers the information they care about, are of immense value. For example, the ability to aggregate information at the level of IT or business service and report on the associated service level agreement (SLA). Better still, the ability to proactively determine potential impacts to the SLA, such as growing network traffic or declining storage availability, so that preventative actions can take place. Easy to understand dashboards and reports showing these results can then be used in discussions about future technology needs, further supporting your trusted advisor approach.

With the right tools you also have a means to demonstrate, during the sales process, how you will be able to meet a prospect’s service-level needs given their specific infrastructure characteristics. In IT, demonstrating how you will achieve what you promise is as important as, if not more important than, what you promise. How will you show that you can deliver 97% service availability, reliably and at low risk to you and to your potential customer? Doing so adds to your trusted advisor stature.
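One simple way to make an availability promise concrete for a prospect is to translate the SLA percentage into a downtime budget for the period. A small sketch (the 97% figure matches the example above; the other targets and the 30-day month are illustrative):

```python
# Sketch: turning an availability SLA percentage into a downtime budget.

def downtime_budget_hours(availability_pct: float, period_hours: float) -> float:
    """Maximum allowed downtime (hours) for a given availability target."""
    return round(period_hours * (1 - availability_pct / 100), 2)

month_hours = 30 * 24  # a 30-day month = 720 hours
print(downtime_budget_hours(97.0, month_hours))   # 21.6 hours/month
print(downtime_budget_hours(99.9, month_hours))   # 0.72 hours/month
```

Showing a prospect that 97% availability still allows nearly a full day of downtime per month is often what starts the conversation about which SLA tier they actually need.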

Thought leadership

Unless you have a specific industry focus, most customers don’t expect you to know everything about their business – just about how your services and solutions can make a strong difference. They will prefer to do business with market leaders and with those who demonstrate insight and understanding of the latest information technology trends, such as cloud services and mobility, and the associated management implications. First, it reduces their purchase risk and makes it more likely that they will actually make a decision to purchase. Second, it adds credibility, again enabling them to see you as a trusted advisor, one who demonstrates a clear understanding of market dynamics and can assist them in making the right decisions.

Credibility depends a lot on how you support those insights with your own products and services. Are you telling customers that cloud and mobile are increasingly important but without having services that support these areas? What about security and applications support?

Being a trusted advisor means that you, and your organization, must be focused on your customers’ success. Make every interaction of value, leverage the right tools and deliver advanced services and you will quickly be seen as a trusted advisor and be able to turn every new customer into a lasting one and a strong advocate.

Learn more about how Kaseya technology can help you create advanced managed services. Read our whitepaper, Proactive Service Level Monitoring: A Must Have for Advanced MSPs.

*Reference: The Challenger Sale, Matthew Dixon and Brent Adamson, Portfolio/Penguin 2011

What tools are you using to manage your IT services?

Author: Ray Wright

IT Best Practices: Minimizing Mean Time to Resolution

Mean Time to Repair/Resolve

Driving IT efficiency is a topic that always makes it to the list of top issues for IT organizations and MSPs alike when budgeting or discussing strategy. In a recent post we talked about automation as a way to help reduce the number of trouble tickets and, in turn, to improve the effective use of your most expensive asset – your professional services staff. This post looks at the other side of the trouble ticket coin – how to minimize the mean time to resolution of problems when they do occur and trouble tickets are submitted.

The key is to reduce the variability in time spent resolving issues. Easier said than done? Mean Time to Repair/Resolve (MTTR) can be broken down into four main activities, as follows:

  • Awareness: identifying that there is an issue or problem
  • Root-cause: understanding the cause of the problem
  • Remediation: fixing the problem
  • Testing: verifying that the problem has been resolved
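Tracking these four stages per ticket makes both the mean and the variability of each stage visible. A minimal sketch, using made-up ticket timings:

```python
# Sketch: breaking MTTR down into the four stages above.
# Ticket timings (hours) are illustrative, not real data.
from statistics import mean, pstdev

tickets = [
    # (awareness, root_cause, remediation, testing)
    (0.2, 4.0, 1.0, 0.5),
    (0.3, 0.5, 1.2, 0.4),
    (0.1, 9.0, 0.8, 0.6),
]

stages = ["awareness", "root_cause", "remediation", "testing"]
for i, stage in enumerate(stages):
    times = [t[i] for t in tickets]
    print(f"{stage:12s} mean={mean(times):.2f}h stdev={pstdev(times):.2f}h")

# Overall MTTR is the mean of the per-ticket totals
mttr = mean(sum(t) for t in tickets)
print(f"overall MTTR = {mttr:.2f}h")
```

Even in this toy data, the root-cause stage shows by far the largest spread, which is the pattern many IT teams see in practice.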

Of these four components awareness, remediation, and testing tend to be the smaller activities and also the less variable ones.

The time taken to become aware of a problem depends primarily on the sophistication of the monitoring system(s). Comprehensive capabilities that monitor all aspects of the IT infrastructure and group infrastructure elements into services tend to be the most productive. Proactive service level monitoring (SLM) enables IT operations to view systems across traditional silos (e.g. network, server, applications) and to analyze the performance trends of the underlying service components. By developing trend analyses in this way, proactive SLM can identify future issues before they occur – for example, when application traffic is expanding and bandwidth is becoming constrained, or when server storage is reaching its limit. When unpredicted problems do occur, being able to quickly identify their severity, eliminate downstream alarms and determine business impact is also important in helping to contain variability and deploy the correct resources for maximum impact.
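One common form of this trend analysis is a simple linear extrapolation of a resource metric, such as storage usage, to estimate when capacity will be exhausted. A rough sketch, using invented daily samples:

```python
# Sketch: proactive trend analysis -- fit a straight line through daily
# storage-usage samples and extrapolate to the day capacity is reached.

def days_until_full(samples_gb: list, capacity_gb: float) -> float:
    """Least-squares slope through daily samples, extrapolated to capacity."""
    n = len(samples_gb)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples_gb) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples_gb))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope <= 0:
        return float("inf")  # usage flat or shrinking: no exhaustion predicted
    return (capacity_gb - samples_gb[-1]) / slope

usage = [410, 418, 425, 434, 441, 450, 458]  # GB used, one sample per day
print(f"~{days_until_full(usage, 500):.0f} days until the 500 GB volume is full")
```

A real SLM system would apply the same idea to bandwidth, memory and transaction-rate trends, raising an alert whenever the projected exhaustion date falls inside the remediation lead time.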

Identifying the root cause is usually the biggest cause of MTTR variability and the one that has the highest cost associated with it. Once again the solution lies both with the tools you use and the processes you put in place. Often management tools are selected by each IT function to help with their specific tasks – the network group will have in-depth network monitoring capabilities, the database group database performance tools, and so on. These tools are generally not well integrated and lack visibility at a service level. Correlation using disparate tools is also manpower-intensive, requiring staff from each function to meet and to try to avoid the inevitable “finger-pointing”.

The service level view is important, not only because it provides visibility into business impact, but also because it represents a level of aggregation from which to start the root cause analysis. Many IT organizations start out using free open-source tools but soon realize there is a cost to “free” as their infrastructures grow in size and complexity. Tools that look at individual infrastructure aspects can be integrated but, without underlying logic, they have a hard time correlating events and reliably identifying root cause. Poor diagnostics can be as bad as no diagnostics in more complex environments. Investigating unnecessary downstream alarms to make sure they are not separate issues is a significant waste of resources.

Consider a frequently cited cause of MTTR variability – poor application performance. In this case there is nothing specifically “broken”, so it’s hard to diagnose with point tools. A unified dashboard that shows both application process metrics and network or packet level metrics provides a valuable diagnostic view. As a simple example, a response-time monitor could send an alert that the response time of an application is too high. Application performance monitoring data might indicate that a database is responding slowly to queries because its buffers are starved and the number of transactions is abnormally high. Integrating with NetFlow or packet data then allows immediate drill-down to isolate which client IP address is the source of the high number of queries. This level of integration speeds the root cause analysis and removes the finger-pointing so that the optimum remedial action can be quickly identified.
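The drill-down just described can be sketched in a few lines: given flow records for the database port, find the client generating the abnormal query load. The flow records and port number are invented; a real implementation would query a NetFlow/sFlow collector.

```python
# Sketch: drilling from an application alert down to flow data to find
# the client generating an abnormal query load.
from collections import Counter

flow_records = [
    # (client_ip, dest_port) -- port 5432 = the database under load
    ("10.0.1.15", 5432), ("10.0.1.15", 5432), ("10.0.1.15", 5432),
    ("10.0.2.20", 5432), ("10.0.1.15", 5432), ("10.0.3.9", 443),
]

# Keep only flows hitting the database, then rank clients by flow count
db_flows = [ip for ip, port in flow_records if port == 5432]
top_talker, count = Counter(db_flows).most_common(1)[0]
print(f"top query source: {top_talker} ({count} of {len(db_flows)} flows)")
```

The value of an integrated tool is that this correlation happens automatically, against live data, rather than in a war-room meeting between the database and network teams.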

Once a problem has been identified, the last two pieces of the MTTR equation can be satisfied. The times required for remediation and testing tend to be far less variable and can be shortened by defining clear processes and responsibilities. Automation can also play a key role. For example, a great many issues are caused by misconfiguration. Rolling back configurations to the last good state can be done automatically, quickly eliminating issues even while in-depth root-cause analysis continues. Automation can play a vital role in testing too, by making sure that performance meets requirements and that service levels have returned to normal.
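The rollback idea can be sketched as follows; the in-memory version history is a stand-in for whatever a real network configuration manager uses (a database, Git, etc.), and the device and config contents are invented.

```python
# Sketch: automated rollback to the last known-good configuration.

config_history = {
    "switch-01": [
        {"version": 1, "good": True,  "config": "vlan 10\nvlan 20"},
        {"version": 2, "good": True,  "config": "vlan 10\nvlan 20\nvlan 30"},
        {"version": 3, "good": False, "config": "vlan 10"},  # bad change
    ]
}

def last_known_good(device: str) -> dict:
    """Return the most recent configuration snapshot marked good."""
    good = [c for c in config_history[device] if c["good"]]
    if not good:
        raise LookupError(f"no known-good config for {device}")
    return max(good, key=lambda c: c["version"])

rollback = last_known_good("switch-01")
print(f"rolling switch-01 back to version {rollback['version']}")
```

Marking snapshots as “good” only after automated post-change tests pass is what makes this safe to trigger without human intervention.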

To maximize IT efficiency and effectiveness and help minimize mean time to resolution, IT management systems can no longer work in vertical or horizontal isolation. The interdependence between services, applications, servers, cloud services and network infrastructure mandates the adoption of comprehensive service-level management capabilities for companies with more complex IT service infrastructures. The amount of data generated by these various components is huge, and the rate of generation is so fast that traditional point tools cannot integrate it or keep up with any kind of real-time correlation.

Learn more about how Kaseya technology can help you manage your increasingly complex IT services. Read our whitepaper, Managing the Complexity of Today’s Hybrid IT Environments

What tools are you using to manage your IT services?

Author: Ray Wright

Using IT Management Tools to Deliver Advanced Managed Services

IT Management Tools

The managed service marketplace continues to change rapidly, particularly as more and more businesses are adopting cloud-based services of one type or another. The cloud trend is impacting managed service providers in numerous ways:

  • Cloud migration increases the potential for consulting projects to manage the transition but reduces the number of in-house servers/devices that need to be monitored/managed, impacting retainer-based revenues.
  • Service decisions are more heavily influenced by line-of-business concerns as customers shift spending from capital to operating expenses.
  • Service providers need to be able to monitor and manage application-related service level agreements, not just infrastructure components.
  • Managing cloud-based applications is less about internal cloud resources and their configuration (which are typically controlled by the cloud provider) and more about access, resource utilization and application performance.

To address these challenges and be able to compete in a marketplace where traditional device-based monitoring services are becoming commoditized, MSPs must create and sell more advanced services to meet the changing needs of their customer base. The benefits of delivering higher value services include greater marketplace differentiation, bigger and more profitable deals, and greater customer satisfaction.

Advanced services require the ability to deploy a proactive service level monitoring system that can monitor all aspects of a customer’s hybrid cloud environment including the core internal network infrastructure, virtualized servers, storage and both internal and cloud-based applications and resources. Predictive monitoring services help ensure uptime and increased service levels via proactive and reliable notification of potential issues. Advanced remedial actions should include not only support for rapid response when the circumstances warrant but also for regular customer reviews to discuss longer term configuration changes, additional resource requirements (e.g. storage), policy changes and so on, based on predictive SLA compliance trend reports.

Beyond advanced monitoring, professional skill sets are also very important particularly when it comes to managing new cloud-based applications services. To be able to scale to support larger customers requires tools that can help simplify the complexity of potential issues and speed mean time to resolution. If every complex issue requires the skills of several experts – server, database, network, application etc., it will be hard to scale your organization as your business grows. Having tools that can quickly identify root-causes across network and application layers is vital.

Cloud customers are increasingly relying on their managed service providers to “fix” problems with their cloud services, whether they be performance issues, printing issues, integration issues or anything else. Getting a rapid response from a cloud service provider who has “thousands of other customers who are working just fine” is hard, especially as they know problems are as likely to be caused by customer infrastructure issues as by their own. Learning tools help MSPs capture and share experiences and turn them into repeatable processes as they address each new customer issue.

Automation is another must-have for advanced service providers. Again, scalability depends on being able to do as much as possible with as few resources as possible. For infrastructure management, automation can help with service and application monitoring as well as network configuration management. Monitoring with application-aware solutions is an attractive proposition for specific applications. For the rest, it helps to be able to establish performance analytics against key performance indicators and diagnostic monitoring of end-user transaction experiences. For network management, being able to quickly compare network configurations and revert to earlier working versions is one of the fastest ways to improve mean time to repair. Automating patch management and policy management for desktops and servers also results in significant manpower savings.

Finally, tools which help manage cloud service usage are also invaluable as customers adopt more and more cloud services. Just as on-premise license management is an issue for larger organizations, so cloud service management, particularly for services such as Office 365, is also an issue. Access to email accounts and collaborative applications such as SharePoint is not only a security issue; it’s also a cost and potentially a performance issue.

Developing advanced services requires a combination of the right skills, resources, processes and technology; in effect, a higher level of organizational maturity. Larger customers will tend to have developed greater levels of process and IT maturity themselves, in order to be able to manage their own growing IT environment. When they turn to a managed service provider for help they will be expecting at least an equivalent level of service provider maturity.

Having the right tools doesn’t guarantee success in the competitive MSP marketplace but can help MSPs create and differentiate more advanced services, demonstrate the ability to support customer-required SLAs, and scale to support larger and more profitable customers.

Learn more about how Kaseya technology can help you create advanced managed services. Read our whitepaper, Proactive Service Level Monitoring: A Must Have for Advanced MSPs.

What tools are you using to create advanced managed services?

Author: Ray Wright

Deliver Higher Service Quality with an IT Management Command Center

IT Mission Control

NASA’s Mission Control Center (MCC) manages very sophisticated technology, extremely remote connections, extensive automation, and intricate analytics – in other words, the most complex world imaginable. A staff of flight controllers at the Johnson Space Center manages all aspects of aerospace vehicle flights. Controllers and other support personnel monitor all missions using telemetry and send commands to the vehicle using ground stations. Managed systems include attitude control and dynamics, power, propulsion, thermal, orbital operations and many other subsystems. They do an amazing job managing the complexity with their command and control center. Of course, we all know how it paid off with the rescue of Apollo 13. For sure, all astronauts appreciate the high service quality MCC provides.

The description of what they do sounds a bit like the world of IT management. While maybe not quite operating at the extreme levels of NASA’s command center, today’s IT managers are working to deliver ever higher service quality amid an increasingly complex world. Managing cloud, mobility, and big data combined with the already complex world of virtualization, shared infrastructure and existing applications, makes meeting service level expectations challenging to say the least. And adding people is usually not an option; these challenges need to be overcome with existing IT staff.

As is true at NASA, a centralized IT command center, properly equipped, can be a game changer. Central command doesn’t mean that every piece of equipment and every IT person must be in one location, but it does mean that every IT manager has the tools, visibility, and information they need to maximize service quality and achieve the highest level of efficiency. To achieve this, here are a few key concepts that should be part of everyone’s central command approach:

Complete coverage: The IT command center needs to cover the new areas of cloud (including IaaS, PaaS, SaaS), mobility (including BYOD), and big data, while also managing the legacy infrastructure (compute, network, storage, and clients) and applications. Business services are now being delivered with a combination of these elements, and IT managers must see and manage it all.

True integration: IT managers must be able to elegantly move between the above coverage areas and service life-cycle functions, including discovery, provisioning, operations, security, automation, and optimization. A command center dashboard with easy access, combined with true SSO (Single Sign-On) and data level integration, allows IT managers to quickly resolve issues and make informed decisions.

Correlation and root cause: The ability to make sense of a large volume of alerts and management information is mandatory. IT managers need to know about any service degradation, and its root cause, in real time, before it becomes a service outage. Mission-critical business services are most often based on IT services, so service uptime needs to meet the needs of the business.
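To make the correlation idea concrete, here is a minimal sketch in Python. The alert records and the simple time-window grouping rule are invented for illustration; real correlation engines also use topology and dependency data.

```python
from datetime import datetime, timedelta

# Hypothetical alert records: (timestamp, device, message).
# A simple correlation pass groups alerts that arrive within a short
# window of each other, so one root-cause event surfaces as a single
# incident instead of a flood of symptom alerts.
def correlate(alerts, window=timedelta(minutes=5)):
    alerts = sorted(alerts, key=lambda a: a[0])
    incidents = []
    for ts, device, message in alerts:
        if incidents and ts - incidents[-1]["last_seen"] <= window:
            incidents[-1]["alerts"].append((device, message))
            incidents[-1]["last_seen"] = ts
        else:
            incidents.append({"first_seen": ts, "last_seen": ts,
                              "alerts": [(device, message)]})
    return incidents

t0 = datetime(2014, 5, 1, 9, 0)
raw = [
    (t0, "switch-01", "link down"),
    (t0 + timedelta(minutes=1), "web-01", "unreachable"),
    (t0 + timedelta(minutes=2), "db-01", "unreachable"),
    (t0 + timedelta(hours=2), "web-02", "disk 90% full"),
]
incidents = correlate(raw)
print(len(incidents))             # two incidents, not four raw alerts
print(incidents[0]["alerts"][0])  # the earliest alert, the likely root cause
```

Grouping by time alone is crude, but it already turns four alerts into two actionable incidents and puts the probable root cause (the switch failure) first.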

Automation: As the world of IT becomes more and more complex, it is impossible to achieve required service levels and efficiency when repetitive, manual, error-prone tasks are still part of the process. IT managers need to automate everything that can be automated! From software deployment and patch management to client provisioning and clean-up, automation is the key to higher service quality and substantial improvement in IT efficiency.
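To illustrate the policy-based automation concept, here is a minimal Python sketch. The policy fields, endpoint records, and remediation queue are all invented for illustration; they are not a Kaseya API.

```python
# A desired-state policy: every endpoint should match these settings.
POLICY = {"antivirus": "enabled", "patch_level": 7}

# Hypothetical inventory of managed endpoints.
endpoints = [
    {"name": "pc-042", "antivirus": "enabled", "patch_level": 7},
    {"name": "pc-117", "antivirus": "disabled", "patch_level": 5},
]

def remediation_tasks(endpoint, policy):
    """Compare an endpoint against the policy and return drift as tasks."""
    tasks = []
    for setting, required in policy.items():
        actual = endpoint.get(setting)
        if actual != required:
            tasks.append((endpoint["name"], setting, actual, required))
    return tasks

# Queue every drifted setting for automatic remediation instead of
# waiting for a manual ticket.
queue = [t for ep in endpoints for t in remediation_tasks(ep, POLICY)]
for name, setting, actual, required in queue:
    print(f"remediate {name}: {setting} {actual} -> {required}")
```

The compliant machine generates no work at all; only the drifted endpoint lands in the queue, which is exactly why policy-driven automation scales where manual checks do not.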

In addition to these requirements for central command, cloud-based IT management is now a very real option. With the growing complexity of cloud, mobility, and big data, along with the ever increasing pace of change, building one’s own command center is becoming more challenging. IT management in the cloud may be the right choice, especially for MSPs and mid-sized enterprises which do not have the resources to continually maintain and update their own IT management environment. Letting the IT management cloud provider keep up with the complexity and changes, while the IT operations team focuses on delivering high service quality, can drive down TCO, improve efficiency, and increase service levels.

Author: Tom Hayes

Half of the Top-Ranked MSPs use Kaseya

MSPMentor MSP501

Kaseya has always been at the front of the pack in terms of solutions that successful MSPs use to run their businesses.  2014 is no exception. Once again, we are honored and humbled to announce that Kaseya MSPs continue to dominate the MSPMentor 501 list for 2014, with our customers claiming nearly 50% of the top 100 spots!  Congratulations to you from all of us.

The 501 companies on the list reported a 28% increase in recurring revenue from 2012 to 2013, 33% more PCs under management, and 32% more servers and network devices under management. Wow!  We are thrilled to see our customers achieve significant and sustainable growth in their businesses.  It’s a validation that MSPs CAN grow and DO grow with Kaseya.

We were happy to see a number of these top MSPs at our recent annual user conference, Kaseya Connect, held earlier this month. In addition to having the opportunity to congratulate them in person, we were able to hear firsthand about their challenges and opportunities. And they were loud and clear. With the prevalence of cloud, mobility, virtualization, and other IT trends, they need an integrated solution to manage all of IT simply, centrally, and automatically. We’re so happy that Kaseya is that solution.

For those not yet Kaseya customers, we invite you to see how and why the top MSPs power their business with us.  Register for a live demo and learn how we can help you grow your business.


Building The World’s Fastest Remote Desktop Management – Part 2

Kaseya Remote Control

In his earlier blog post, Chad Gniffke outlined some of the key technologies underpinning the new Kaseya Remote Control solution in VSA release 7.0. These include the use of modern video codecs to efficiently transmit screen data.

Going beyond these items, the engineering team at Kaseya has looked at every aspect of the remote desktop management workflow to shave precious seconds off the time required to establish a session.

In this post, we review three changes that have a big impact on both the time to connect and the experience once connected.

Keeping it Lean

When it comes to performance, what’s sometimes more important than what you do is what you don’t do. In the new Kaseya Remote Control, we have applied this principle in several areas.

When first connecting to a new agent, downloading remote desktop management binaries to the agent will represent a substantial portion of the connect time. With the new Kaseya Remote Control in VSA 7.0, this delay has been completely eliminated: Everything needed for Remote Control is now included with the agent install itself, and is always available for immediate use.

Likewise, the time to schedule and run an agent procedure against an agent has traditionally accounted for a large portion of the time to connect. Rather than attempt to optimize this, the new Remote Control doesn’t run an agent procedure at all. Instead, it maintains a persistent connection to the VSA server over a dedicated network connection that’s always on, and always available to start Remote Control immediately.

Making it Parallel

Establishing a Remote Control session involves a surprising number of individual steps. In broad strokes, we need to:

  • Launch the viewer application.
  • Establish a connection from the viewer to the VSA server.
  • Perform encryption handshakes to ensure each connection is secure.
  • Send Remote Control session details to the agent.
  • Wait for the user to accept the Remote Control session (if required by policy).
  • Establish relayed connectivity.
  • Collect network candidates for P2P connectivity.
  • Transmit P2P connection candidates over the network.
  • Perform P2P connectivity tests.
  • Select the best available network connection to start the session on.

But it turns out most of these steps can be performed in parallel – at least to some degree. For example, the information required to start a P2P connection to an agent can be collected while establishing an encrypted connection to the VSA. If user acceptance is required, a complete P2P connection can usually be negotiated long before the user approves the session. This dramatically reduces the overall time required to establish each session.

Utilizing the Hardware

Once connected to a remote agent, Kaseya Remote Control will start streaming full screen video data over the network connection, and drawing it to the viewer’s screen. The video codec under the hood ensures that a minimal amount of data is sent over the network, especially if nothing much is changing on screen. But on the viewer side, we still need to render the entire remote image to screen, at up to 20 frames per second. This can result in increased CPU load and battery drain on the viewer machine.

To reduce the impact on the viewer side, the new Kaseya Remote Control in VSA 7.0 now uses graphics hardware to scale and render the remote video content to screen. Modern graphics cards can perform these operations very efficiently, resulting in a reduced drain on system resources. This will be especially obvious when maintaining long-running connections to multiple remote agents.

Diving Deeper

These items represent a handful of the many changes going into our new Kaseya Remote Control to speed up connect times, and improve the experience once connected.

To find out more, stop by the Product Lab at Kaseya Connect in Las Vegas next week! And watch this space for a future post about the brand new P2P connection establishment technology that forms the backbone of our next generation Kaseya Remote Control.

Remote Desktop Management – Overcoming Browser Plugin Issues

Remote Plugin

The ability to efficiently manage remotely is a primary driver in selecting an IT management solution. For the last few years, we have relied heavily on browser-side plugins to deliver real-time remote control and behind-the-scenes remote desktop management tools via our Live Connect platform. Browser-side plugins have long provided us with an efficient way to deliver feature-rich web applications that work across all major browser platforms. This efficiency is now in jeopardy due to changing attitudes toward enabling plugins.

Browser manufacturers are releasing updated versions of their platforms on a more rapid cadence (Chrome and Firefox release new versions every 6 weeks). Over the last year, these updates have introduced new restrictions on plugins that have resulted in unplanned code changes and prolonged loss of functionality in some circumstances.

Installing and uninstalling plugins reliably has also increased in complexity. This aspect alone has made supporting plugin applications more difficult for software companies and even more frustrating for our users. In the case of Remote Control here at Kaseya, this is an important reliability issue on which we can no longer compromise.

Browsers are becoming more like lightweight operating systems, and their manufacturers are beginning to move in different directions. For example, Firefox is starting to move away from plugins altogether. Chrome is developing a proprietary plugin API that locks all plugin-based apps within the browser sandbox. This is primarily a security move by Google. Internet Explorer and Safari have not yet made a move on their plugin support, but we anticipate that this will happen soon.

So, the reports of the death of browser plugins may be an exaggeration, but not by very much. It doesn’t matter which browser you prefer, by the end of this year there are going to be very few opportunities for consistent plugin-based applications such as remote control.

So, if we can’t leverage browser plugins to deliver a great remote control experience, where do we go next? The answer, oddly enough, is to return to an installed native application. An installed application gives us the ability to deliver a high-performance solution that can capture and operate any aspect of a graphical operating system remotely.

You may be thinking that an installed app can solve these problems, but that it brings back a host of headaches around distribution, installation, and communications, the very issues that drove us to use browser plugins in the first place. In addition, we want to continue to deliver an integrated management solution that includes remote control and the full breadth of other management capabilities for flexibility and efficiency.

Happily, Kaseya is in a position to help you succeed with remote desktop management in the post-plugin world. We are delivering a browser-free, highly performant installed application that connects in under six seconds to machines anywhere in the world. We can even do this over high-latency, low-bandwidth connections. We simplify the client-side deployment challenge by deploying the installed application with our agent, so all the software you need to manage the endpoint, including remote control, is already there. Finally, we leverage a URI handler to seamlessly invoke the remote control capabilities from the comfort of the same browser your technical team uses to manage the rest of your environment.
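The URI-handler idea can be sketched in a few lines: the browser hands a custom-scheme link to the installed application, which parses it and launches the right session. The `vsa-rc` scheme and its parameters below are invented for illustration; they are not the actual Kaseya URI format.

```python
from urllib.parse import urlparse, parse_qs

# Parse a hypothetical custom-scheme URI handed to the installed app
# by the browser, extracting the target agent and session mode.
def handle_uri(uri):
    parsed = urlparse(uri)
    if parsed.scheme != "vsa-rc":
        raise ValueError("unexpected scheme: " + parsed.scheme)
    params = parse_qs(parsed.query)
    return {"agent": parsed.netloc, "mode": params.get("mode", ["view"])[0]}

session = handle_uri("vsa-rc://agent-12345?mode=control")
print(session)  # {'agent': 'agent-12345', 'mode': 'control'}
```

On the client side, the operating system maps the custom scheme to the installed application (for example, via a registry entry on Windows), so a click in the browser starts the native session without any plugin.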

Seem hard to believe? The coming out party for these capabilities will be at Kaseya Connect on April 14th. Stay tuned for more information, or better yet, register today for Kaseya Connect and come see it for yourself!

SaaSy and Smart

Is it time to get SaaSy and Smart with your IT Management solution?

Software-as-a-Service (SaaS) has become the solution for many areas of today’s businesses – CRM, HRM, ERP, Email, accounting, collaboration, field services, project management, office applications, recruiting, travel, etc., etc. Examples include Salesforce, NetSuite, BaseCamp, Office 365, Google Docs, QuickBooks, GoToMeeting, Concur, Taleo, Constant Contact; the list goes on. Arguably, most of these SaaS solutions are delivering the features, security, availability, and performance demanded by business managers. SaaS has become a way of improving services, freeing up resources, and reducing overall costs.

SaaS

So can SaaS for IT Management do this same thing for IT managers?

IT management is becoming increasingly challenging. Mobility, BYOD, virtualization, Cloud, the Internet of Things (IoT), etc. are all driving up IT complexity and making it difficult to keep management control. The technology is changing so fast it is impossible to keep up. Dr. James Canton, World Leading Authority on the Extreme Future, said, “Technology is changing at warp speed. If you were to disappear and come back after 90 days, the Net would have doubled, bandwidth would have increased by a third, and there would be a half a dozen innovations you would have missed.” If you are an IT Manager, you are afraid to take a vacation!

The right SaaS IT management solution may be just what is needed to help IT managers regain control of their ever-changing and complex world. With the right capabilities, SaaS IT management can allow the focus to be on keeping critical IT services working well for customers and users, rather than on buying another tool, integrating another module, or updating to another management software release.

Of course, for the hardcore IT manager, accessing his or her IT management solution from the cloud could be a bit scary. However, with the proper investigation and careful selection, SaaS is the IT Management answer for more and more IT organizations. But choosing the right solution can be difficult. So as you think about SaaS as a possibility, here are eight things to consider:

  1. SaaS Management Features: First and foremost, the SaaS solution must deliver the IT management features you need. It should be able to manage your entire IT environment – on-premises, cloud, and mobile – integrated into a single command center. It must allow you to manage every asset regardless of where it is: in the office, at home, or on the road. And you must be able to accomplish management actions with policy-based automation. Automation is the key to efficiency and to doing more with your current staff.
  2. Cloud Architecture: Look for a SaaS solution that is secure, reliable, and highly available. In particular, it must include secure, reliable access from the SaaS IT Management software to all of your IT assets, especially to the devices of your home office and mobile workforce. Technologies, such as FIPS certified 256-bit AES encryption, as part of smart agent technology, can allow easy anytime, anywhere, secure access.
  3. Breadth of Coverage: The SaaS IT management capabilities must be both wide and deep. Wide, meaning it addresses the full spectrum of management needs, such as monitoring, remote control, asset management, software deployment, patch management, anti-virus, backup, and security. And deep, meaning delivering these capabilities with a strong set of features — as in policy-based, automated deployment, configuration, execution, update, log, reporting, monitoring and remediation.
  4. Well Integrated: Too often, SaaS IT management offerings are limited to one or two functions. They provide remote control, or anti-virus, or patch, but do not integrate them all together. Insist on an integrated answer. Avoid choosing a SaaS solution that still requires inefficient, ineffective, “swivel-chair” or “Ctrl Alt” management.
  5. Manages Diverse Platforms: The platforms and devices you need to manage continue to proliferate. Windows and Linux; PCs and Macs; Android, iOS, Windows Phone, and BlackBerry mobile devices; networks, storage, servers, and applications, physical and virtual – all need to be covered by the SaaS solution. With many companies moving to BYOD (Bring-Your-Own-Device) and the IoT (Internet of Things), the challenge will only become greater. The SaaS vendor must be able to address these diverse platforms today, and have an architectural approach and a vision to cover your future needs.
  6. Fast Time-to-Value: One of the key reasons to move to a SaaS IT Management solution is agility, the ability to respond to demands, and move quickly. This can only be achieved if the SaaS vendor has the deep discovery, policy-based automation, and templates to help you begin managing new infrastructure and assets quickly and easily. Ease of implementation is key here.
  7. Simple Pricing: Avoid complex pricing models based on a confusing selection of management modules, devices priced by size, number of CPUs, admin and user seats, etc. It will be difficult to keep track, and over time your costs will grow beyond what you intended. Look for complete management packages with simple, pay-as-you-go, device-based pricing models.
  8. SaaS Philosophy: Choose a SaaS IT management provider who is committed to SaaS, and has the financial strength to stay the course. A vendor with a focus on SaaS is more likely to have the architectural design, multi-tenancy, scale, security, and reliability features to ensure a positive experience today and tomorrow.

These eight areas provide an overview of the top considerations for SaaS IT Management. Over the coming weeks, I will explore each of these areas in a bit more depth. In the meantime, visit the Kaseya SaaS solution area to explore further.

Give us your thoughts on SaaS!

Author: Tom Hayes
