IT Best Practices: Minimizing Mean Time to Resolution

Mean Time to Repair/Resolve

Driving IT efficiency is a topic that always makes it to the list of top issues for IT organizations and MSPs alike when budgeting or discussing strategy. In a recent post we talked about automation as a way to help reduce the number of trouble tickets and, in turn, to improve the effective use of your most expensive asset – your professional services staff. This post looks at the other side of the trouble ticket coin – how to minimize the mean time to resolution of problems when they do occur and trouble tickets are submitted.

The key is to reduce the variability in time spent resolving issues. Easier said than done? Mean Time to Repair/Resolve (MTTR) can be broken down into 4 main activities, as follows:

  • Awareness: identifying that there is an issue or problem
  • Root-cause: understanding the cause of the problem
  • Remediation: fixing the problem
  • Testing: verifying that the problem has been resolved

Of these four components, awareness, remediation, and testing tend to be the smaller activities and also the less variable ones.

The time taken to become aware of a problem depends primarily on the sophistication of the monitoring system(s). Comprehensive capabilities that monitor all aspects of the IT infrastructure and group infrastructure elements into services tend to be the most productive. Proactive service level monitoring (SLM) enables IT operations to view systems across traditional silos (e.g. network, server, applications) and to analyze the performance trends of the underlying service components. By developing trend analyses in this way, proactive SLM can identify future issues before they occur – for example, when application traffic is expanding and bandwidth is becoming constrained, or when server storage is reaching its limit. When unpredicted problems do occur, being able to quickly identify their severity, eliminate downstream alarms, and determine business impact is also important in helping to contain variability and deploy the correct resources for maximum impact.
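
To make the trend idea concrete, here is a minimal sketch of such a prediction, assuming metric samples arrive as (timestamp, value) pairs; the function name and figures are illustrative, not any particular product’s implementation:

```python
# Minimal sketch of trend-based exhaustion prediction (illustrative only):
# fit a straight line to recent utilization samples and estimate when the
# metric will cross its capacity limit.
from datetime import datetime, timedelta

def predict_exhaustion(samples, capacity):
    """samples: list of (datetime, value) pairs, oldest first.
    Returns the estimated datetime the linear trend crosses capacity, or None."""
    t0 = samples[0][0]
    xs = [(t - t0).total_seconds() for t, _ in samples]
    ys = [v for _, v in samples]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Ordinary least-squares slope and intercept.
    denom = sum((x - mean_x) ** 2 for x in xs)
    if denom == 0:
        return None
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / denom
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # utilization flat or shrinking: no exhaustion predicted
    seconds_to_limit = (capacity - intercept) / slope
    return t0 + timedelta(seconds=seconds_to_limit)

# Example: storage samples trending toward a 1000 GB volume limit.
now = datetime(2014, 5, 1)
samples = [(now + timedelta(days=d), 700 + 10 * d) for d in range(14)]
print(predict_exhaustion(samples, capacity=1000))  # ~30 days after the first sample
```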

Identifying the root cause is usually the biggest cause of MTTR variability and the one that has the highest cost associated with it. Once again the solution lies both with the tools you use and the processes you put in place. Often management tools are selected by each IT function to help with their specific tasks – the network group will have in-depth network monitoring capabilities, the database group database performance tools, and so on. These tools are generally not well integrated and lack visibility at the service level. Correlation using disparate tools is also manpower intensive, requiring staff from each function to meet and try to avoid the inevitable “finger-pointing”.

The service level view is important, not only because it provides visibility into business impact, but also because it represents a level of aggregation from which to start the root cause analysis. Many IT organizations start out by using free open-source tools but soon realize there is a cost to “free” as their infrastructures grow in size and complexity. Tools that look at individual infrastructure aspects can be integrated but, without underlying logic, they have a hard time correlating events and reliably identifying root cause. Poor diagnostics can be as bad as no diagnostics in more complex environments. Investigating unnecessary downstream alarms to make sure they are not separate issues is a significant waste of resources.

Consider a frequently cited cause of MTTR variability – poor application performance. In this case nothing is specifically “broken”, so it’s hard to diagnose with point tools. A unified dashboard that shows both application process metrics and network or packet-level metrics provides a valuable diagnostic view. As a simple example, a response-time monitor could send an alert that an application’s response time is too high. Application performance monitoring data might indicate that a database is responding slowly to queries because its buffers are starved and the number of transactions is abnormally high. Integrating with network NetFlow or packet data allows immediate drill-down to isolate which client IP address is the source of the high number of queries. This level of integration speeds root cause analysis and removes the finger-pointing, so that the optimum remedial action can be quickly identified.
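
To make the drill-down concrete, here is a hedged sketch in Python – the flow-record shape, addresses, and byte counts are invented for the example, not any particular NetFlow API:

```python
# Illustrative sketch: given flow records for the database server's port,
# aggregate by client IP to find the source of a query storm.
from collections import Counter

# Assumed record shape: (src_ip, dst_ip, dst_port, bytes) -- hypothetical sample data.
flows = [
    ("10.1.1.17", "10.0.0.5", 1433, 9_800_000),
    ("10.1.1.42", "10.0.0.5", 1433,   120_000),
    ("10.1.1.17", "10.0.0.5", 1433, 7_500_000),
    ("10.1.2.9",  "10.0.0.5", 1433,    90_000),
]

def top_talkers(flows, db_ip, db_port, limit=3):
    """Rank client IPs by bytes sent to the database service."""
    totals = Counter()
    for src, dst, port, nbytes in flows:
        if dst == db_ip and port == db_port:
            totals[src] += nbytes
    return totals.most_common(limit)

print(top_talkers(flows, "10.0.0.5", 1433))
# [('10.1.1.17', 17300000), ('10.1.1.42', 120000), ('10.1.2.9', 90000)]
```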

Once a problem has been identified, the last two pieces of the MTTR equation can be satisfied. The times required for remediation and testing tend to be far less variable and can be shortened by defining clear processes and responsibilities. Automation can also play a key role. For example, a great many issues are caused by misconfiguration. Rolling back configurations to the last good state can be done automatically, quickly eliminating issues even while in-depth root-cause analysis continues. Automation plays a vital role in testing too, by making sure that performance meets requirements and that service levels have returned to normal.
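
A rollback automation might look something like this sketch; the backup directory layout and device names are assumptions, and the local file copy stands in for whatever transport actually pushes the configuration to the device:

```python
# Hedged sketch of "roll back to the last known-good configuration".
import shutil
from pathlib import Path

BACKUP_DIR = Path("/var/backups/netconf")  # assumed layout: <device>/<timestamp>.cfg

def last_known_good(device: str) -> Path:
    """Newest archived config for the device, by timestamped filename."""
    backups = sorted((BACKUP_DIR / device).glob("*.cfg"))
    if not backups:
        raise FileNotFoundError(f"no archived config for {device}")
    return backups[-1]

def rollback(device: str, running_cfg: Path) -> None:
    good = last_known_good(device)
    if good.read_text() == running_cfg.read_text():
        return  # already at the last good state
    shutil.copy(good, running_cfg)  # stand-in for pushing the config to the device
    print(f"{device}: restored {good.name} while root-cause analysis continues")
```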

To maximize IT efficiency and effectiveness and help minimize mean time to resolution, IT management systems can no longer work in vertical or horizontal isolation. The inter-dependence between services, applications, servers, cloud services and network infrastructure mandates the adoption of comprehensive service-level management capabilities for companies with more complex IT service infrastructures. The amount of data generated by these various components is huge, and the rate of generation is so fast that traditional point tools cannot integrate it or keep up with any kind of real-time correlation.

Learn more about how Kaseya technology can help you manage your increasingly complex IT services. Read our whitepaper, Managing the Complexity of Today’s Hybrid IT Environments.

What tools are you using to manage your IT services?

Author: Ray Wright

Using IT Management Tools to Deliver Advanced Managed Services

IT Management Tools

The managed service marketplace continues to change rapidly, particularly as more and more businesses are adopting cloud-based services of one type or another. The cloud trend is impacting managed service providers in numerous ways:

  • Cloud migration increases the potential for consulting projects to manage the transition but reduces the number of in-house servers/devices that need to be monitored/managed, impacting retainer-based revenues.
  • Service decisions are more heavily influenced by line-of-business concerns as customers shift spending from capital to operating expenses.
  • Service providers need to be able to monitor and manage application-related service level agreements, not just infrastructure components.
  • Managing cloud-based applications is less about internal cloud resources and their configuration (which are typically controlled by the cloud provider) and more about access, resource utilization and application performance.

To address these challenges and be able to compete in a marketplace where traditional device-based monitoring services are becoming commoditized, MSPs must create and sell more advanced services to meet the changing needs of their customer base. The benefits of delivering higher value services include greater marketplace differentiation, bigger and more profitable deals, and greater customer satisfaction.

Advanced services require the ability to deploy a proactive service level monitoring system that can monitor all aspects of a customer’s hybrid cloud environment, including the core internal network infrastructure, virtualized servers, storage, and both internal and cloud-based applications and resources. Predictive monitoring services help ensure uptime and increased service levels via proactive and reliable notification of potential issues. Advanced remedial actions should include not only support for rapid response when circumstances warrant, but also regular customer reviews to discuss longer-term configuration changes, additional resource requirements (e.g. storage), policy changes and so on, based on predictive SLA compliance trend reports.

Beyond advanced monitoring, professional skill sets are also very important, particularly when it comes to managing new cloud-based application services. Being able to scale to support larger customers requires tools that can help simplify the complexity of potential issues and speed mean time to resolution. If every complex issue requires the skills of several experts – server, database, network, application, etc. – it will be hard to scale your organization as your business grows. Having tools that can quickly identify root causes across network and application layers is vital.

Cloud customers are increasingly relying on their managed service providers to “fix” problems with their cloud services, whether they are performance issues, printing issues, integration issues or something else. Getting a rapid response from a cloud service provider who has “thousands of other customers who are working just fine” is hard, especially as the provider knows problems are as likely caused by customer infrastructure issues as by theirs. Learning tools help MSPs capture and share experiences and turn them into repeatable processes as they address each new customer issue.

Automation is another must-have for advanced service providers. Again, scalability depends on being able to do as much as possible with as few resources as possible. For infrastructure management, automation can help with service and application monitoring as well as network configuration management. Monitoring with application-aware solutions is an attractive proposition for specific applications. For the rest, it helps to be able to establish performance analytics against key performance indicators and diagnostic monitoring of end-user transaction experiences. For network management, being able to quickly compare network configurations and revert to earlier working versions is one of the fastest ways to improve mean time to repair. Automating patch management and policy management for desktops and servers also results in significant manpower savings.
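
The comparison step itself needs nothing exotic – Python’s standard difflib shows the shape of it, with config snippets invented for the example:

```python
# Illustrative config comparison: a one-line change (here a mistyped address)
# shows up immediately in a unified diff.
import difflib

old = "interface Gi0/1\n ip address 10.0.0.1 255.255.255.0\n".splitlines(keepends=True)
new = "interface Gi0/1\n ip address 10.0.0.2 255.255.255.0\n".splitlines(keepends=True)

print("".join(difflib.unified_diff(old, new,
                                   fromfile="last-good.cfg",
                                   tofile="running.cfg")))
```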

Finally, tools which help manage cloud service usage are also invaluable as customers adopt more and more cloud services. Just as on-premise license management is an issue for larger organizations, so too is cloud service management, particularly for services such as Office 365. Access to email accounts and collaborative applications such as SharePoint is not only a security issue; it’s also a cost and potentially a performance issue.

Developing advanced services requires a combination of the right skills, resources, processes and technology; in effect, a higher level of organizational maturity. Larger customers will tend to have developed greater levels of process and IT maturity themselves, in order to be able to manage their own growing IT environment. When they turn to a managed service provider for help they will be expecting at least an equivalent level of service provider maturity.

Having the right tools doesn’t guarantee success in the competitive MSP marketplace but can help MSPs create and differentiate more advanced services, demonstrate the ability to support customer-required SLAs, and scale to support larger and more profitable customers.

Learn more about how Kaseya technology can help you create advanced managed services. Read our whitepaper, Proactive Service Level Monitoring: A Must Have for Advanced MSPs.

What tools are you using to create advanced managed services?

Author: Ray Wright

Deliver Higher Service Quality with an IT Management Command Center

IT Mission Control

NASA’s Mission Control Center (MCC) manages very sophisticated technology, extreme remote connections, extensive automation, and intricate analytics – in other words, the most complex world imaginable. A staff of flight controllers at the Johnson Space Center manages all aspects of aerospace vehicle flights. Controllers and other support personnel monitor all missions using telemetry, and send commands to the vehicle using ground stations. Managed systems include attitude control and dynamics, power, propulsion, thermal, orbital operations and many other subsystems. They do an amazing job managing the complexity with their command and control center. Of course, we all know how it paid off with the rescue of Apollo 13. All astronauts surely appreciate the high service quality the MCC provides.

The description of what they do sounds a bit like the world of IT management. While maybe not quite operating at the extreme levels of NASA’s command center, today’s IT managers are working to deliver ever higher service quality amid an increasingly complex world. Managing cloud, mobility, and big data, combined with the already complex world of virtualization, shared infrastructure and existing applications, makes meeting service level expectations challenging, to say the least. And adding people is usually not an option; these challenges need to be overcome with existing IT staff.

As is true at NASA, a centralized IT command center, properly equipped, can be a game changer. Central command doesn’t mean that every piece of equipment and every IT person must be in one location, but it does mean that every IT manager has the tools, visibility and information needed to maximize service quality and achieve the highest level of efficiency. To achieve this, here are a few key concepts that should be part of everyone’s central command approach:

Complete coverage: The IT command center needs to cover the new areas of cloud (including IaaS, PaaS, SaaS), mobility (including BYOD), and big data, while also managing the legacy infrastructure (compute, network, storage, and clients) and applications. Business services are now being delivered with a combination of these elements, and IT managers must see and manage it all.

True integration: IT managers must be able to elegantly move between the above coverage areas and service life-cycle functions, including discovery, provisioning, operations, security, automation, and optimization. A command center dashboard with easy access, combined with true SSO (Single Sign-On) and data level integration, allows IT managers to quickly resolve issues and make informed decisions.

Correlation and root cause: The ability to make sense of a large volume of alerts and management information is mandatory. IT managers need to know about any service degradation, together with its root cause, in real time – before a service outage occurs. Mission-critical business services are most often based on IT services, so service uptime needs to meet the needs of the business.
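
One common building block for this kind of correlation is a service dependency graph used to suppress downstream alarms. The sketch below is illustrative – the component names and dependency map are assumptions, not a product algorithm:

```python
# Suppress alerts for components that sit downstream of an already-failed
# dependency, so operators see the probable root cause instead of an alarm storm.

# Assumed dependency map: component -> components it depends on.
DEPENDS_ON = {
    "web-app":     ["app-server"],
    "app-server":  ["database", "core-switch"],
    "database":    ["core-switch"],
    "core-switch": [],
}

def root_causes(alerting: set[str]) -> set[str]:
    """Keep only alerting components with no alerting dependency (direct or indirect)."""
    def has_alerting_dependency(node: str) -> bool:
        for dep in DEPENDS_ON.get(node, []):
            if dep in alerting or has_alerting_dependency(dep):
                return True
        return False
    return {c for c in alerting if not has_alerting_dependency(c)}

print(root_causes({"web-app", "app-server", "core-switch"}))  # {'core-switch'}
```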

Automation: As the world of IT becomes more and more complex, it is impossible to achieve required service levels and efficiency when repetitive, manual, error-prone tasks are still part of the process. IT managers need to automate everything that can be automated! From software deployment and patch management to client provisioning and clean-up, automation is the key to higher service quality and substantial improvement in IT efficiency.

In addition to these requirements for central command, cloud-based IT management is now a very real option. With the growing complexity of cloud, mobility, and big data, along with the ever-increasing pace of change, building one’s own command center is becoming more challenging. IT management in the cloud may be the right choice, especially for MSPs and mid-sized enterprises that do not have the resources to continually maintain and update their own IT management environment. Letting the IT management cloud provider keep up with the complexity and changes, while the IT operations team focuses on delivering high service quality, can drive down TCO, improve efficiency, and increase service levels.

Author: Tom Hayes

Building the World’s Fastest Remote Desktop Management – Part 3

ICE (Interactive Connectivity Establishment)

In previous installments of this series, we went over some key technologies used for the new Kaseya Remote Control solution, and some of the features that make it so fast.

But possibly the most important part of getting a fast and reliable Remote Control session is the network connectivity used under the hood. In this post, we cover the types of connectivity used for Kaseya Remote Control, the advantages of each, and how we combine them for additional benefit.

P2P Connectivity

Peer-to-peer connectivity is the preferred method of networking between the viewer application and the agent. It generally offers high throughput and low latency – and because the viewer and agent are connected directly, it places no additional load on the VSA server.

Kaseya Remote Control uses an industry standard protocol called ICE (Interactive Connectivity Establishment) to establish P2P connectivity. ICE is designed to test a wide variety of connection options to find the best one available for the current network environment. This includes TCP and UDP, IPv4 and IPv6, and special network interfaces such as VPN and Teredo.

In addition, ICE takes care of firewall traversal and NAT hole punching. To achieve this, it makes use of the fact that most firewalls and NATs allow reverse connectivity on ports that have been used for outbound connections. This ensures no additional firewall configuration is required to support the new Remote Control solution.

ICE will select the best available P2P connection based on the type of connectivity, how long each connection takes to establish, and a variety of other factors. In practice, this means you will usually get TCP connectivity on local networks, UDP connectivity when crossing network boundaries, and VPN connectivity when no other options are available.
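
For readers curious about the mechanics, ICE ranks candidates with a standard priority formula (RFC 5245, section 4.1.2.1) before connectivity checks run. The sketch below uses the RFC’s recommended type preferences; Kaseya’s actual weighting of the “variety of other factors” is not shown here:

```python
# The standard ICE candidate priority formula from RFC 5245:
#   priority = (2^24)*type_pref + (2^8)*local_pref + (256 - component_id)

TYPE_PREFERENCE = {          # RFC-recommended values
    "host": 126,             # local interface: usually best
    "peer-reflexive": 110,
    "server-reflexive": 100, # address learned via STUN
    "relayed": 0,            # relay: last resort
}

def candidate_priority(cand_type: str, local_pref: int = 65535, component: int = 1) -> int:
    return (TYPE_PREFERENCE[cand_type] << 24) | (local_pref << 8) | (256 - component)

for t in TYPE_PREFERENCE:
    print(f"{t:>16}: {candidate_priority(t)}")
```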

However, testing a wide variety of connectivity options can take several seconds – and in some network environments, it may not be possible to get a P2P connection on any network interface. This brings us to…

Relayed Connectivity

As an alternative to P2P connectivity, Kaseya Remote Control also uses connections relayed through the VSA. Relayed connections are quick to establish and unlikely to be affected by firewalls or NAT devices. They also tend to be more stable over long periods of time, especially relative to P2P connections over UDP.

In practical terms, a relayed connection is made up of outbound TCP connections from the viewer and agent to the VSA, where they are linked up for bidirectional traffic forwarding.
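
The forwarding core of such a relay is conceptually tiny. Here is a minimal, illustrative asyncio sketch that pairs two inbound TCP connections and pumps bytes between them – real relays add authentication, session matching, and TLS, and the port number here is made up:

```python
# Minimal sketch of the relay idea: accept two inbound TCP connections and
# forward bytes between them in both directions.
import asyncio

async def pump(reader, writer):
    while data := await reader.read(4096):
        writer.write(data)
        await writer.drain()
    writer.close()

async def relay(port=8443):  # illustrative port
    waiting = []

    async def on_connect(reader, writer):
        waiting.append((reader, writer))
        if len(waiting) == 2:  # pair the viewer and agent legs
            (r1, w1), (r2, w2) = waiting
            waiting.clear()
            await asyncio.gather(pump(r1, w2), pump(r2, w1))

    server = await asyncio.start_server(on_connect, "0.0.0.0", port)
    async with server:
        await server.serve_forever()

# asyncio.run(relay())  # pairs the first two clients that connect
```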

To minimize the network impact, relayed connections from the agent use the same port on the VSA as the agent does for check-ins. This means that anytime an agent can check in, it will also be able to establish a relay connection. Similarly, on the viewer side, relayed connections will use the same port on the VSA as the browser uses to view the VSA website: If one works, so will the other.

Combining P2P & Relayed Connectivity

It’s clear that P2P and relayed connectivity both have their distinct advantages, so it would be a shame to settle for just one or the other. To get the best of both worlds, the new Kaseya Remote Control uses both types of connectivity in parallel, as sketched in code after the list below. In particular:

  • When a new Remote Control session starts, we immediately attempt to establish both types of connectivity.
  • As soon as we get a connection of any type, the session starts. Typically, relayed connectivity will be established first, so we’ll start with that. This results in very quick connection times.
  • With the session now underway, we continue to look for better connectivity options. In most cases, a P2P connection will become available within a few seconds.
  • When a P2P connection is established, the session will immediately switch over from relayed to P2P connectivity. This is totally seamless to the user, and causes no interruption to video streaming or mouse and keyboard events.
  • Even if a P2P connection is established, the relayed connection is maintained for the duration of the session. So if P2P connectivity drops off for any reason, Kaseya Remote Control will seamlessly switch back to the relayed connection, while attempting to establish a new P2P connection in the background.
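
Here is the promised sketch of that race-then-upgrade strategy, using Python’s asyncio; the two connect functions are placeholders with invented timings, not the real transports:

```python
# Hedged sketch: race both connection types, start on the first to finish,
# and seamlessly upgrade when the better one arrives.
import asyncio

async def connect_relay():
    await asyncio.sleep(0.2)   # relayed connections come up quickly
    return "relay"

async def connect_p2p():
    await asyncio.sleep(2.0)   # ICE checks take a few seconds
    return "p2p"

async def start_session():
    tasks = {asyncio.create_task(connect_relay()),
             asyncio.create_task(connect_p2p())}
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    transport = done.pop().result()
    print(f"session started on {transport}")   # typically the relay
    for task in pending:                       # P2P keeps negotiating
        better = await task
        print(f"switched to {better}; relay kept as a fallback")

asyncio.run(start_session())
```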

The upshot of all this is that you can have fast connection times, high throughput, low latency, and a robust Remote Control connection, all at the same time: No compromises required!

Let Us Know What You Think

The new Desktop Remote Control will be available with VSA 7.0 on May 31. We’re looking forward to getting it into customer hands and receiving feedback. To learn more about Kaseya and our plans, please take a look at our roadmap to see what we have in store for future releases of our products.

Mobile App Management for Ultimate Control of Mobile Devices

Mobile Device Management

If you ever want to see top-notch diligence, watch a first-time mom pack her toddler’s lunch box – at least in my house. She packs every snack and meal in separate pouches and in measured quantities. She keeps all the unhealthy high-sugar stuff out and includes all my son’s favorite foods so that he eats well even when she is not around. The only other time I witnessed such meticulous control over what goes in, what stays out, and how much, was by the IT staff at my previous company, which had 100,000+ employees globally.

IT administrators, as caretakers of corporate IT infrastructure, have always wanted close, tight and centralized control of every piece of IT asset to ensure business continuity, optimal performance, high productivity and bullet-proof security. As the size and complexity of the infrastructure increases, the IT management challenges rise exponentially. Companies today extend mobile access to corporate data, along with company-owned mobile devices in many cases. About a decade ago companies trusted BlackBerrys for their secure, encrypted access. There weren’t any third-party phone apps that users could find and install by themselves on these devices. It was a thick-walled garden back then. But with the advent of the Android, iOS and Windows mobile platforms, the walls have come down and it has now become a fenceless park within a gated community.

The plethora of apps on each of these mobile platforms has given users the freedom to install any program that they believe will boost productivity. Consequently the line between the corporate network and the external world is getting blurred. This makes the IT staff, including CIOs and CISOs, nervous, as it exposes sensitive corporate data to a potential security breach. The IT staff, therefore, now wants to control and restrict the mobile apps on phones that access corporate systems, data and applications. In Mobile Device Management (MDM) parlance, this is known as Mobile App Management.

The Mobile App Management (MAM) capability allows IT admins to maintain an inventory of apps installed on all the mobile devices included in the corporate network. It also allows admins to create app catalogs to enforce policy compliance on these distributed mobile end points.

Some of the other key features of MAM include (a simple compliance-check sketch follows the list):

  • Remote monitoring of apps installed on devices by:
    • Blocking apps through a “Disallowed apps policy” with on-screen alerts and a compliance view
    • Enforcing required apps through a “Required apps policy” that sends app invites with links and reminds users periodically if they are not in compliance
  • Capability to push enterprise apps to devices remotely
  • Support for App Store applications
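
Here is the compliance-check sketch promised above; the bundle IDs and policy sets are invented, and real MAM policies are of course richer than two sets:

```python
# Illustrative compliance check against "disallowed" and "required" app policies.

DISALLOWED = {"com.example.filesharing"}          # hypothetical policy
REQUIRED = {"com.corp.email", "com.corp.vpn"}     # hypothetical policy

def compliance(installed_apps: set[str]) -> dict:
    return {
        "blocked_present": sorted(installed_apps & DISALLOWED),
        "required_missing": sorted(REQUIRED - installed_apps),
        "compliant": not (installed_apps & DISALLOWED) and REQUIRED <= installed_apps,
    }

device = {"com.corp.email", "com.example.filesharing"}
print(compliance(device))
# {'blocked_present': ['com.example.filesharing'],
#  'required_missing': ['com.corp.vpn'], 'compliant': False}
```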

When integrated with an enterprise IT management solution that IT admins use to monitor networks, servers, desktops, and laptops remotely, such as Kaseya VSA with MDM, MDM/MAM enables centralized control of the IT infrastructure, including the mobile workforce, from a single pane of glass. In addition to MAM, the Kaseya MDM solution also offers features like loss or theft handling, geolocation tracking, device locking, auditing and complete device management and configuration. It enables IT admins to treat mobile devices just like computers.

In summary, Mobile App Management is a critical aspect of MDM to ensure security of enterprise data accessed from mobile devices. An MDM solution that integrates with a centralized IT management solution provides a powerful tool for IT staff to command centrally and manage remotely their enterprise IT infrastructure including the mobile workforce.

Author: Varun Taware

Why IT Operations Needs a Comprehensive IT Management Cloud Solution

Performance-related issues are among the hardest IT problems to solve, period. When something is broken, alarm bells sound (metaphorically in most cases) and alerts are sent to let IT ops know there’s an issue. But when performance slows for end-user applications there’s often no notification until someone calls to complain. Yet, in reality, losing 10 or 15 seconds every minute for an hour is at least as bad as an hour of downtime in an 8-hour day. At least as bad, and maybe worse – there’s the productivity hit, but there’s also significant frustration. At least when the system is really down, users can turn their attention to other tasks while the fixes are installed.

IT Systems Management

One reason why this remains an ongoing issue for many IT organizations is that there are few management tools that provide an overall view of the entire IT infrastructure with the ability to correlate between all of its different components. It’s not that there aren’t individual tools to monitor and manage everything in the system; it’s that coordinating results from these different tools is time consuming and hard to do. There are lots of choices when it comes to server monitoring, desktop monitoring, application monitoring, network monitoring, cloud monitoring, etc., and there are suites of products that cover many of the bases. The challenge is that in most cases these management system components never get fully integrated to the point where the overall solution can find the problem and quickly identify root cause.

If IT were a static science, it’s a good bet that this problem would have been solved a long time ago. But as we know, IT is a hotbed of innovation. New services, capabilities and approaches are released regularly, and the immense variety of infrastructure components supporting today’s IT environments makes it difficult for monitoring and management vendors to keep up. New management tools appear very frequently too, but the cost and effort of addressing existing infrastructures is often prohibitive for start-ups trying to get their new management capabilities to market quickly.

The complexity and pace of change lead some IT organizations to rely on open source technologies and “freeware”, with the benefit that capital costs and operational expenses are kept to a minimum. Yet the results of using such tools are often less than satisfactory. While users can benefit greatly from the community of developers, it’s often hard to get a comprehensive product without buying a commercially supported version. Another issue for open source IT management solutions is that they’re generally not architected to support a large and increasingly complex IT infrastructure – at least not in a way that makes it possible to quickly perform sophisticated root-cause analyses. The result is that while the tools may be inexpensive, the time and resources needed to use them can be much greater, and their impact disappointing.

IT management is its own “big data” problem.

As IT infrastructure continues to become ever more complex, IT management is becoming its own big data problem. Querying an individual device or server to check status and performance may retrieve only a relatively small amount of data to be sent to the management or monitoring system – a small volume of data, but likely a diverse set of information indicating the status of numerous parameters and configuration items. Polling mobile devices and desktops, servers, applications, cloud services, hypervisors, routers, switches, firewalls and more generates a whole lot of data, each item having its own unique set of parameters and configurations to retrieve. Polling hundreds, thousands or even tens of thousands of devices every few minutes (so that the management system stays current with device status) can create significant network traffic volume that must be supported without impacting business applications.

On top of that, the volume of data, the polling frequency, and the resultant velocity of traffic must be accommodated to support storage, trend analysis and real-time processing. System management information is usually stored for months or even years, so that historical trends and analyses can be performed. Most importantly, the management system needs to rapidly process information in real time to correlate cause and effect, disable downstream alert and alarm conditions, and perform predictive analysis so that valid messages can be proactively sent to alert IT ops. Now system management architecture becomes important. Add to that the need for flexibility to accommodate the ever-changing IT landscape, and management system design to support this “big data application” becomes a critical issue.
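
Some back-of-envelope arithmetic shows why. All of the fleet size, polling interval and payload figures below are illustrative assumptions:

```python
# Rough arithmetic behind the "big data" claim: polling traffic and storage
# for a mid-sized fleet (all numbers are illustrative assumptions).
devices = 10_000
poll_interval_s = 120          # poll every 2 minutes
payload_bytes = 4_096          # status + parameters per poll response

polls_per_day = devices * (86_400 // poll_interval_s)
daily_bytes = polls_per_day * payload_bytes

print(f"{polls_per_day:,} polls/day")                         # 7,200,000 polls/day
print(f"{daily_bytes / 1e9:.1f} GB/day raw management data")  # ~29.5 GB/day
print(f"{daily_bytes * 365 / 1e12:.1f} TB/year retained")     # ~10.8 TB/year
```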

This, in part, is why IT management vendors are migrating their solutions to the cloud. As IT infrastructures continue to expand in terms of size, complexity and diversity the ability to support the volumes of data generated and the performance needed to undertake comprehensive root-cause analysis becomes more and more challenging for purely on-premise solutions. In addition, cloud solutions offer the promise of “best practice” advice that can only be derived from a shared-service model, with the caveat that security and privacy remain paramount.

Of course, cloud solutions, with their pay-as-you-use pricing and infrastructure-free installations (nothing to deploy on-premise), are also becoming far more attractive than perpetual licensing arrangements. However, the bottom line is that cloud-architected solutions are extremely extensible and able to more quickly and easily accommodate new functionality, to the benefit of all users. Not the least of these benefits is the ability to deploy better diagnostic tools and capabilities to support the needs of today’s diverse IT user communities for high levels of systems availability AND performance.

Author: Ray Wright

3 Keys to Managing Today’s Complex IT Infrastructures

IT Management

Many factors contribute to the increasing complexity of business IT environments today; the rapid adoption of cloud computing, big data and mobile device proliferation to name a few. These and other key trends are making it harder for organizations to effectively and efficiently manage and secure the environment and to assure IT service levels and business success.

Newer requirements include controlling mobile devices, maintaining visibility into virtualized resources and services, and achieving increasingly demanding SLAs for critical business applications. Effective IT service management becomes more challenging when services are reliant on dynamically shared resources and when some resources are on-premise and some are in a public cloud.

Meeting these challenges has often been labor intensive and costly because the available IT management tools have been narrowly focused and poorly integrated. This post outlines key management concerns and identifies what IT professionals should look for when determining the best approach to addressing management complexity.

Factors Driving Change

Cloud computing, mobility and big data are being adopted by enterprises of all sizes. However, delivering the benefits reliably and consistently across a distributed organization often requires a complex combination of infrastructure and support technologies.

Cloud Computing – The cloud is growing in popularity because it provides organizations with faster access to applications and services while reducing the development cost. However, as these applications and services get integrated into existing processes and with existing applications, both the new and the old need to be managed together. New tools are needed to manage cloud infrastructures and applications, but these cannot be separate from the tools managing legacy applications and the existing infrastructure. More tools equal more complexity, increased staffing requirements and lower efficiency.

Mobility – Employees today need to be able to work remotely – from home and from the road. Sometimes they use company-owned and provided devices, and sometimes they prefer to use their own, commonly called Bring Your Own Device (BYOD) – who wants to carry two smartphones these days, with separate contact lists and email accounts? As a result, companies have to provide management solutions that address both of these scenarios. For company-owned devices, secure access, data back-up and loss prevention are critical issues. For BYOD, the same things are important, but it’s also necessary to distinguish between personal files – contacts, photos, calendars – and business information, so that the loss of the device doesn’t necessarily mean the loss of personal information.

Big Data – For most small and mid-sized businesses, big data refers to working with and obtaining results from the analysis of large and complex data sets provided by SaaS companies, third-party vendors and service providers – for example, utilizing social media data to identify market opportunities and target prospects. The issue is that big data fosters changes in company operational approaches and increases the number and type of users who need information access. This increases the number of users, the types of devices, the network traffic, the importance of data integrity, the need for system reliability and performance, the volume of stored data, the number of application interactions… The infrastructure becomes more complex and now carries more information vital to business success. Accordingly, system management and the maintenance of service levels become increasingly important too.

Management Impact

The management implication is that the new capabilities need to be managed along with legacy infrastructures and applications in an integrated, automated fashion. Integration is important because new and legacy resources together deliver services to the business. Understanding relationships, dependencies, security, and performance is vital to meeting business service commitments. Automation is important because of the increasing complexity and growing number of management tasks which can no longer be managed with manual approaches.

In fact, IT management can itself become a big data problem. The scale of data created by IT management systems – the collection of frequently polled device management data, events, logs, etc. – is very significant. Real-time analysis and reporting on this data are required to make the actionable decisions necessary to keep the new “hybrid” IT environments performing and meeting the needs of the business.

What to look for in an IT management solution

What’s needed is an integrated, comprehensive and cloud-based management tool with extensive automation capabilities. Tools should meet the following three requirements – be able to:

  1. Manage cloud infrastructure and application services along with legacy on-premise services with an integrated management system, all within a single command center.
  2. Manage company-owned and employee-owned mobile devices, along with traditional end user clients, as part of an integrated management solution, including the ability to remotely access devices anytime, anywhere.
  3. Automate every manual, repetitive task possible to maximize IT efficiency and reduce human error.

Kaseya’s IT management solution integrates a wide range of management capabilities to enable IT organizations and MSPs to command everything within IT centrally, to manage remote and distributed environments with ease, and to automate all aspects of IT management, delivering higher service quality and greater IT efficiency. Kaseya enables IT professionals to manage all aspects of the IT environment – including on-premise, cloud, hybrid-cloud, virtualized, distributed and mobile components. And Kaseya’s solution itself is delivered via the Kaseya IT management cloud or as on-premise software.

Author: Ray Wright

Users Are Going Mobile And So Should Your IT Management

mobile device

Circa 2003: I needed to print sensitive corporate data for a client meeting the next morning. Those files were stored on a remote corporate server. I logged on to a company desktop, connected to the corporate LAN and printed the files. The next morning, I realized I got the wrong versions. I hurried back to the office, logged on to the desktop and printed the correct files. I barely made it in time for the client meeting and forgot my glasses in the cab as I reviewed the content on the way. Around this time, BlackBerry launched their first smartphones capable of email and web surfing. By the end of 2003, mobile Internet users were virtually nonexistent.

Circa June 2007: Enter Steve Jobs with the iPhone, which completely redefined smartphones as pocket-sized mini-computers. By the end of 2007, there were 400 million mobile internet users globally.

Today (2014): Got a smartphone…check. Got a tablet…check. Got iOS and Android devices…check and check. Set up office applications on them…done. Need to look up corporate files? Wait a minute…and done! Today, there are more than 1.5 billion mobile internet users in the world, and very soon they will surpass desktop internet users.

Since 2007, the adoption of internet-capable smartphones has been stupendous. Almost every corporate employee today owns a smartphone for personal and/or office use. Mobile access to corporate information boosts business productivity (except when you are busy checking Facebook). This in turn helps increase the job satisfaction of employees and keeps the company agile and responsive to business needs on the go. This is the essence of workforce mobility. But, in order to be future-proof, let’s not misunderstand mobility as the mere use of mobile phones to access data. The definition of “mobile” in this context should entail any computing device, capable of wireless data communication, that moves around with you (i.e. smartphone, tablet, smart watch, or Google Glass – if that ever takes off). And, who knows, we may have “smart paper” coming soon.

This proliferation of mobile devices, in volume and in platform diversity, increases the challenges for IT management. The higher the number of endpoints that access enterprise data, the greater is the exposure to security risks.

The rapid adoption of mobile devices drives two important trends for the IT management staff, namely:

1. Mobile Device Management (MDM) – Controlling and managing company owned mobile devices provided to employees

2. Bring Your Own Device (BYOD) – Controlling the enterprise data access on employee’s personal mobile devices

These two capabilities together are often referred to as Enterprise Mobility Management (EMM). To stay modern and current with the latest technology access, a company’s IT policy has to evolve to support these trends.

MDM has been around since the smartphones from BlackBerry were introduced in the market a decade ago. It entails complete control of the data, applications and device functionality by the company. It is a logical extension of the approach companies have adopted over the years to manage desktops, servers and laptops. But the lack of an integrated solution to treat these mobile devices as “just another IT asset” (at a high level) puts tremendous strain on the IT management staff to enforce IT policies consistently on these distributed, diverse mobile endpoints. This is particularly important as the definition of mobile devices gets expanded – e.g. wearable gadgets, which may become all-pervasive in the future.

Additionally, with the advent of all-powerful smartphones, employees are now demanding access to corporate data and applications on their personal mobile devices too. But they do not want the corporate IT guys controlling the non-corporate stuff that goes on their personal devices. This trend for BYOD is strongest among the millennials entering the workforce. IT managers have a real challenge pivoting their corporate policy to manage data instead of managing devices. Maintaining a fine balance between corporate control and freedom of personal use should not be an art, but a logical and simplified process using a robust integrated solution such as Kaseya’s Enterprise Mobility Management. Such a solution should allow IT admins to:

  • Command centrally and manage remotely for simplified and efficient IT management of the mobile workforce.
  • Redefine the logical end point to be the device data and not the device itself, so that security is enforced by managing data, not devices. This is possible by building secure “containers” that isolate corporate data from personal data on the mobile device; the corporate container can be encrypted and/or wiped without touching the personal data on the device (see the sketch after this list).
  • Enforce strong policy management and support through Active Directory/ LDAP integration.
  • Unify the endpoint management experience by allowing management of all mobile devices and other IT assets through a single pane of glass.
  • Simplify mobile application management, which typically includes maintaining app catalogs for blocked apps and mandatory apps, an inventory of installed apps, and the ability to push enterprise apps remotely.
  • Ensure encryption and security of enterprise data at every step including while at rest on the device, as well as during transmission between the device and enterprise servers.
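
As promised, here is a hedged sketch of the container idea using the (real) Python cryptography package’s Fernet recipe; the class and its key handling are illustrative, not Kaseya’s implementation:

```python
# Hedged sketch of a "secure container": corporate data lives in its own
# encrypted store, so IT can wipe the corporate key without touching
# personal files. Requires: pip install cryptography
from cryptography.fernet import Fernet

class CorporateContainer:
    def __init__(self):
        self._key = Fernet.generate_key()   # in practice, escrowed by the MDM server
        self._store: dict[str, bytes] = {}

    def put(self, name: str, data: bytes) -> None:
        self._store[name] = Fernet(self._key).encrypt(data)

    def get(self, name: str) -> bytes:
        return Fernet(self._key).decrypt(self._store[name])

    def remote_wipe(self) -> None:
        """Destroy the key: corporate data is unrecoverable, personal data untouched."""
        self._key = None
        self._store.clear()

box = CorporateContainer()
box.put("q3-forecast.xlsx", b"sensitive numbers")
print(box.get("q3-forecast.xlsx"))  # b'sensitive numbers'
box.remote_wipe()
```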

In summary, mobile adoption is booming, and it mandates that IT management evolve too. This is possible only through a robust, integrated IT management solution that enables unified endpoint management for mobile devices.

Author: Varun Taware

Half of the Top-Ranked MSPs use Kaseya

MSPMentor MSP501

Kaseya has always been at the front of the pack in terms of solutions that successful MSPs use to run their businesses.  2014 is no exception. Once again, we are honored and humbled to announce that Kaseya MSPs continue to dominate the MSPMentor 501 list for 2014, with our customers claiming nearly 50% of the top 100 spots!  Congratulations to you from all of us.

The 501 companies on the list reported a 28% increase in recurring revenue from 2012 to 2013, 33% more PCs under management, and 32% more servers and network devices under management. Wow!  We are thrilled to see our customers achieve significant and sustainable growth in their businesses.  It’s a validation that MSPs CAN grow and DO grow with Kaseya.

We were happy to see a number of these top MSPs at our recent annual user conference, Kaseya Connect, held earlier this month. In addition to having the opportunity to congratulate them in person, we were able to hear firsthand about their challenges and opportunities. And they were loud and clear. With the prevalence of cloud, mobility, virtualization and other IT trends, they need an integrated solution to manage all of IT simply, centrally and automatically. We’re so happy that Kaseya is that solution.

For those not yet Kaseya customers, we invite you to see how and why the top MSPs power their business with us.  Register for a live demo and learn how we can help you grow your business.


Building The World’s Fastest Remote Desktop Management – Part 2

Kaseya Remote Control

In his earlier blog post, Chad Gniffke outlined some of the key technologies underpinning the new Kaseya Remote Control solution in VSA release 7.0. This includes using modern video codecs to efficiently transmit screen data.

Going beyond these items, the engineering team at Kaseya has looked at every aspect of the remote desktop management workflow to shave precious seconds off the time required to establish a session.

In this post, we review three changes that have a big impact on both the time to connect and the experience once connected.

Keeping it Lean

When it comes to performance, what you don’t do sometimes matters more than what you do. In the new Kaseya Remote Control, we have applied this principle in several areas.

When first connecting to a new agent, downloading remote desktop management binaries to the agent will represent a substantial portion of the connect time. With the new Kaseya Remote Control in VSA 7.0, this delay has been completely eliminated: Everything needed for Remote Control is now included with the agent install itself, and is always available for immediate use.

Likewise, the time to schedule and run an agent procedure against an agent has traditionally accounted for a large portion of the time to connect. Rather than attempt to optimize this, the new Remote Control doesn’t run an agent procedure at all. Instead, it maintains a persistent connection to the VSA server over a dedicated network connection that’s always on, and always available to start Remote Control immediately.

Making it Parallel

Establishing a Remote Control session involves a surprising number of individual steps. In broad strokes, we need to:

  • Launch the viewer application.
  • Establish a connection from the viewer to the VSA server.
  • Perform encryption handshakes to ensure each connection is secure.
  • Send Remote Control session details to the agent.
  • Wait for the user to accept the Remote Control session (if required by policy).
  • Establish relayed connectivity.
  • Collect network candidates for P2P connectivity.
  • Transmit P2P connection candidates over the network.
  • Perform P2P connectivity tests.
  • Select the best available network connection to start the session on.

But it turns out most of these steps can be performed in parallel – at least to some degree. For example, the information required to start a P2P connection to an agent can be collected while establishing an encrypted connection to the VSA. If user acceptance is required, a complete P2P connection can usually be negotiated long before the user approves the session. This dramatically reduces the overall time required to establish each session.
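
As a hedged illustration of the overlap (the step names and durations below are invented, not measured Kaseya internals), Python’s asyncio.gather shows the shape of it:

```python
# Overlapping independent setup steps instead of running them sequentially.
import asyncio, time

async def encrypted_connect():       # TLS-style handshake to the server
    await asyncio.sleep(0.30)
    return "secure channel"

async def gather_p2p_candidates():   # enumerate local/STUN addresses meanwhile
    await asyncio.sleep(0.25)
    return ["192.168.1.10:60001", "203.0.113.7:60001"]

async def establish():
    start = time.perf_counter()
    channel, candidates = await asyncio.gather(
        encrypted_connect(), gather_p2p_candidates()
    )
    # Sequentially this would cost ~0.55s; in parallel it's ~0.30s.
    print(f"ready in {time.perf_counter() - start:.2f}s "
          f"with {len(candidates)} P2P candidates")

asyncio.run(establish())
```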

Utilizing the Hardware

Once connected to a remote agent, Kaseya Remote Control will start streaming full screen video data over the network connection, and drawing it to the viewer’s screen. The video codec under the hood ensures that a minimal amount of data is sent over the network, especially if nothing much is changing on screen. But on the viewer side, we still need to render the entire remote image to screen, at up to 20 frames per second. This can result in increased CPU load and battery drain on the viewer machine.

To reduce the impact on the viewer side, the new Kaseya Remote Control in VSA 7.0 now uses graphics hardware to scale and render the remote video content to screen. Modern graphics cards can perform these operations very efficiently, resulting in a reduced drain on system resources. This will be especially obvious when maintaining long-running connections to multiple remote agents.

Diving Deeper

These items represent a handful of the many changes going into our new Kaseya Remote Control to speed up connect times, and improve the experience once connected.

To find out more, stop by the Product Lab at Kaseya Connect in Las Vegas next week! And watch this space for a future post about the brand new P2P connection establishment technology that forms the backbone of our next generation Kaseya Remote Control.
