Three Key Monitoring Capabilities for VMware Virtualized Servers

The percentage of servers that are virtualized continues to grow, but management visibility remains a challenge. In this blog post we look at three key monitoring capabilities – full metal, datastore, and performance – that give you the visibility and control you need to keep your virtualized applications performing well.

Before we start, here is a description of the information models that are important to hypervisor management:

Common Information Model

Common Information Model, or CIM, is an open standard that defines how devices, and elements of devices, in a datacenter are managed and monitored.

VMware Infrastructure API

The VI API is a proprietary implementation of CIM provided by VMware for management and monitoring of components related to the VMware hypervisor.

Full metal monitoring

Fan status

The fan is essential for proper server function. As rack density goes up, server volume shrinks and fans need to work at higher speeds, which means more wear and tear. A broken fan in a server can quickly cause major heat buildup that affects the server and possibly neighbouring servers. The good news is that it’s relatively easy to monitor the state of the fans. The CIM_Fan class exposes a property called HealthState that reports the health of a fan: OK, degraded, or failed.

PSU health

Power supply health is also important to monitor. Most enterprise servers can be configured with redundant power supplies, and in addition it’s good to have a spare as backup. OMC_PowerSupply is a class that exposes the HealthState property for each PSU in your server. Just like fan health, the PSU is reported as OK, degraded, or failed.
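
As an illustration, here is a minimal sketch using the open-source pywbem library, assuming an ESXi host with its CIM service enabled on the standard port 5989; the host address and credentials are placeholders, and the numeric codes are the standard DMTF HealthState values:

    import pywbem

    # DMTF-standard HealthState codes shared by CIM_Fan and OMC_PowerSupply
    HEALTH = {0: "Unknown", 5: "OK", 10: "Degraded", 15: "Minor failure",
              20: "Major failure", 25: "Critical failure",
              30: "Non-recoverable error"}

    # Placeholder address and credentials; ESXi serves CIM over HTTPS on 5989
    conn = pywbem.WBEMConnection("https://esxi-host.example.com:5989",
                                 ("root", "password"),
                                 default_namespace="root/cimv2",
                                 no_verification=True)  # lab use only

    for cim_class in ("CIM_Fan", "OMC_PowerSupply"):
        for inst in conn.EnumerateInstances(cim_class):
            state = HEALTH.get(inst["HealthState"], "Unknown")
            print(cim_class, inst["ElementName"], "->", state)

The same enumerate-and-check pattern covers both classes, which is why a single loop handles fans and PSUs here.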

Power usage

The VI API can be used to measure average power usage, which gives an indication of the server’s utility cost. More power usage means more heat, which means even more utility cost in the form of heat dissipation. The counter to watch is power.power.average.
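
As a minimal sketch, here is how that counter can be read via the open-source pyVmomi bindings for the VI API; the vCenter/ESXi host names and credentials are placeholders:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ctx)
    perf = si.content.perfManager

    # Build a "group.name.rollup" -> counterId map once
    ids = {"%s.%s.%s" % (c.groupInfo.key, c.nameInfo.key, c.rollupType): c.key
           for c in perf.perfCounter}

    host = si.content.searchIndex.FindByDnsName(dnsName="esxi-host.example.com",
                                                vmSearch=False)
    spec = vim.PerformanceManager.QuerySpec(
        entity=host, maxSample=1, intervalId=20,  # 20 s real-time interval
        metricId=[vim.PerformanceManager.MetricId(
            counterId=ids["power.power.average"], instance="")])

    result = perf.QueryPerf(querySpec=[spec])
    print("Average power draw: %s W" % result[0].value[0].value[0])
    Disconnect(si)

The same connection and counter map are reused in the datastore and memory sketches later in this post.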


RAID controller, storage volume and battery backup

Three key storage elements that you should monitor are the RAID controller, the storage volumes and the battery. The controller and disks seem obvious, but the battery? In many cases a high-performance RAID controller has a battery to back up the onboard memory in case of a power outage. The memory on the controller is most commonly used for write-back caching, and when the server loses power, the battery ensures that the cache remains consistent until you restore power to the server and its contents can be written to disk.

Datastore monitoring

Utilization, IOPS and latency are metrics that should be monitored and analyzed together. When you have performance problems in a disk subsystem, an “OK” latency can tell you to go and look for problems with IOPS, high utilization can tell you why you may not get the expected IOPS out of the system, and so on.

Utilization

Utilization can be calculated from the capacity and freeSpace properties of the DatastoreSummary object.
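
A short sketch of that calculation, reusing the “si” connection from the power example above (both properties are reported in bytes):

    from pyVmomi import vim

    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.Datastore], recursive=True)
    for ds in view.view:
        s = ds.summary  # DatastoreSummary: capacity and freeSpace in bytes
        used_pct = 100.0 * (s.capacity - s.freeSpace) / s.capacity
        print("%s: %.1f%% used (%.0f GiB free)"
              % (s.name, used_pct, s.freeSpace / 2**30))
    view.Destroy()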

IOPS

IO operations per second can be monitored using the VI API counter datastore.datastoreIops.average, which provides an average of read and write IO operations (a query example follows the latency section below).

Latency

Latency can be measured using the datastore.totalWriteLatency.average and datastore.totalReadLatency.average counters. They show average read and write latency for the whole chain, which includes both kernel and device latency.
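
Reusing the perf, ids and host objects from the power sketch, one query pattern covers the IOPS and latency counters; instance="*" returns one series per datastore (keyed by datastore ID), and the latency counters report milliseconds:

    for name in ("datastore.datastoreIops.average",
                 "datastore.totalReadLatency.average",
                 "datastore.totalWriteLatency.average"):
        spec = vim.PerformanceManager.QuerySpec(
            entity=host, maxSample=1, intervalId=20,
            metricId=[vim.PerformanceManager.MetricId(counterId=ids[name],
                                                      instance="*")])
        for series in perf.QueryPerf(querySpec=[spec])[0].value:
            print(name, series.id.instance, series.value[0])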

Performance monitoring

CPU

Threads scheduled to run on a CPU can be in one of two states: waiting or ready. Both states can tell a story about resource shortage. The lesser evil of the two is the wait state, which indicates that the thread is waiting for an IO operation to complete. This can be as simple as waiting for an answer from a resource external to the host, or waiting on disk time. The more serious state is the so-called ready state, which indicates that the thread is ready to run but there is no free CPU to serve it.
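
Ready time is exposed through the cpu.ready.summation counter, reported as milliseconds per sample interval; a quick sketch of the usual conversion to “CPU ready %”, assuming the 20-second real-time interval:

    # cpu.ready.summation reports milliseconds of ready time per sample
    # interval; divide by the interval length to get a percentage.
    def ready_percent(ready_ms, interval_s=20):
        return 100.0 * ready_ms / (interval_s * 1000.0)

    # e.g. 2,000 ms of ready time in one 20 s sample is 10% CPU ready --
    # commonly treated as a sign the host's CPUs are oversubscribed
    print(ready_percent(2000))  # 10.0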


Memory ballooning and IOPS

Memory ballooning is a process that can occur when a host experiences a low-memory condition and probes its virtual machines for memory to free up. The balloon driver in each VM tries to allocate as much memory as possible within the VM (up to 65% of the available VM memory), and the host frees this memory and adds it to the host memory pool.

The memory ballooning counter, mem.vmmemctl.average, shows when this happens. So how can memory ballooning make a dent in your IO graph, you may ask? After the host reclaims memory from VMs, those VMs may start to use their own virtual memory and page memory blocks to disk, which is why memory ballooning may precede higher-than-normal IO activity.

Memory swapping

Ballooning may happen even when there is no problem; it’s a strategy the host uses to make sure there is free memory for any VM to consume. Host swapping, however, is always a sign of trouble. There are a number of counters you want to monitor:

  • mem.swapin.average
  • mem.swapout.average
  • mem.swapinRate.average
  • mem.swapoutRate.average

These counters show, both cumulatively and as a rate, how much memory is swapped in and out. Host memory swapping is double trouble: not only does it indicate a low host memory situation, it is also going to affect IO performance.
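
Again reusing the perf, ids and host objects from the power sketch, a quick poll of the ballooning and swap counters listed above (the ballooned and swapped totals are in KB; the rate counters in KB per second):

    for name in ("mem.vmmemctl.average", "mem.swapin.average",
                 "mem.swapout.average", "mem.swapinRate.average",
                 "mem.swapoutRate.average"):
        spec = vim.PerformanceManager.QuerySpec(
            entity=host, maxSample=1, intervalId=20,
            metricId=[vim.PerformanceManager.MetricId(counterId=ids[name],
                                                      instance="")])
        print(name, perf.QueryPerf(querySpec=[spec])[0].value[0].value[0])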

Final words

Monitoring, reporting and notification on all these metrics can be a challenge. The good news for Kaseya customers is that you can implement the monitoring described in this article using the new network monitor module in VSA 7.0, available now.


Author: Robert Walker

Using Marketing and IT Automation with the Cloud to build really cool marketing campaigns


Marketing and IT are moving closer together in this era of digital marketing. The number of IT systems a marketer has to work with has grown exponentially, which means the job of a marketer involves ever closer collaboration with the IT department. In this blog I’d like to share a bit of my experience as a marketing automation expert for Kaseya and hopefully interest more people in joining the marketing automation movement.

Marketing automation leverages IT systems for marketing efforts in a similar way as Kaseya does for IT systems management. The idea is to automate processes with the goal of reducing the manual effort required to run marketing campaigns. There are many levels at which this can be achieved, and I am proud to say that here at Kaseya we have made immense strides over the past years toward higher levels of marketing automation.

It seems like ages ago (but actually isn’t) that standard marketing methodology was to send out manually printed batches of letters. Nowadays we have completely automated our marketing efforts. A good example: we at Kaseya are constantly nurturing our customer database in our NetSuite CRM system (yes, there is the first bit of cloud-based IT) with our marketing automation solution, Marketo (and there is another one). Some of these marketing activities involve inviting customers to online webinars (another cloud-based IT solution) or promoting the free cloud trial of our products. An important point here is that we believe the future is in cloud-based solutions, and Kaseya is leading the industry in offering IT management cloud solutions. So we are not just promoting cloud-based IT solutions, we are also implementing them.

One of the interesting things this digital invasion of the marketing department has shown us is that we cannot do without good IT support. Our jobs rely on the systems running, whether they are cloud-based offerings that need to be online 24/7 or personal devices such as laptops and tablets used to create ever more interesting campaigns. Strong automation also makes our lives much easier, and as a marketing automation expert I always feel there is nothing cooler than a completely automated campaign that takes on a life of its own and actually improves our service offering to prospects and customers. Aspects of IT systems management such as reporting are things we in marketing also run into: just as IT wants to know what assets are being managed and how they are performing, marketing wants insight into how campaigns are running and performing. And, as with IT departments, we as marketers continue to improve our processes and adapt to the changing business environment and the new opportunities it brings.

Overall, the link between automated marketing, IT and the new cloud services can help marketing drive really cool marketing campaigns.

If you have any questions or would like to share your personal experiences either as a marketer or as an IT manager about marketing automation I’d love to read your comments.

MSP Best Practice: Thoughts on Being a Trusted Advisor


Nothing is more important for MSPs than retaining existing customers and having the opportunity to upsell new and more profitable services. The cost of customer acquisition can make it challenging to profit significantly during an initial contract period. Greater profitability typically comes from continuing to deliver services year after year and from attaching additional services to the contract as time goes on. Becoming a trusted advisor to customers – so that you are both highly regarded and able to influence their purchase decisions – has always been important to this process. However, how you become a trusted advisor, and how quickly, depends on some key factors.

Challenge your customer’s thinking

According to Matthew Dixon and Brent Adamson, authors of “The Challenger Sale”*, it’s not what you sell that matters, it’s how you sell it! When discussing why B2B customers keep buying or become advocates – in short, how they become loyal customers – the unexpected answer is that their purchase experiences have more impact than the combined effect of a supplier’s brand, products/services and value-add!

Not that these factors aren’t important – they are clearly vital too – it’s just that beyond their initial purchase needs, what customers most want from suppliers is help with the tough business decisions they have to make. This is especially true when it comes to technology decisions. The best service providers have great experience helping other, similar companies solve their challenges and are willing to share their knowledge. They sit on the same side of the table as the customer and help evaluate alternatives in a thoughtful and considerate fashion. In short, they operate as trusted advisors.

The key is to start out as you mean to continue. How can you make every customer interaction valuable from the very outset, even before your prospect becomes a customer? Dixon and Adamson suggest the best way is to challenge their thinking with your commercial insight into their business. What might be the potential business outcomes of contracting your managed services? Yes, they will benefit from your expertise in monitoring and maintaining their IT infrastructure, but in addition, how can the unique characteristics of your services and your professional resources enable new business opportunities for them? What might those opportunities be?

Tools matter

Beyond insights gained from working closely with other customers, having the right tools can have a significant impact too. For example, there are monitoring and management tools that can be used to provide visibility into every aspect of a customer’s IT environment. But tools that are focused on a single device or technology area, or are purely technical in nature, have only limited value when it comes to demonstrating support for customers’ business needs. Most customers have a strong interest in minimizing costs and reducing the likelihood and impact of disruption, such as might be caused by a virus or other malware. Being able to discuss security, automation and policy management capabilities, and show how these help reduce costs, is very important during the purchase process.

Customers who absolutely rely on IT service availability to support their business require greater assurance. Tools that cut through the complexity and aggregate the details to give decision makers the information they care about are of immense value – for example, the ability to aggregate information at the level of an IT or business service and report on the associated service level agreement (SLA). Better still is the ability to proactively determine potential impacts to the SLA, such as growing network traffic or declining storage availability, so that preventative actions can take place. Easy-to-understand dashboards and reports showing these results can then be used in discussions about future technology needs, further supporting your trusted advisor approach.

With the right tools you also have a means to demonstrate, during the sales process, how you will be able to meet a prospect’s service-level needs and their specific infrastructure characteristics. In IT, demonstrating how you will achieve what you promise is as important, if not more so, than what you promise. How will you show that you can deliver 97% service availability, reliably and at low risk to you and to your potential customer? Doing so adds to your trusted advisor stature.

Thought leadership

Unless you have a specific industry focus, most customers don’t expect you to know everything about their business – just how your services and solutions can make a strong difference. They will prefer to do business with market leaders and with those who demonstrate insight into and understanding of the latest information technology trends, such as cloud services and mobility, and the associated management implications. First, it reduces their purchase risk and makes it more likely that they will actually make a decision to purchase. Second, it adds credibility, again enabling them to see you as a trusted advisor – one who demonstrates a clear understanding of market dynamics and can assist them in making the right decisions.

Credibility depends a lot on how you support those insights with your own products and services. Are you telling customers that cloud and mobile are increasingly important but without having services that support these areas? What about security and applications support?

Being a trusted advisor means that you, and your organization, must be focused on your customers’ success. Make every interaction valuable, leverage the right tools and deliver advanced services, and you will quickly be seen as a trusted advisor – able to turn every new customer into a lasting one and a strong advocate.

Learn more about how Kaseya technology can help you create advanced managed services. Read our whitepaper, Proactive Service Level Monitoring: A Must Have for Advanced MSPs.

*Reference: The Challenger Sale, Matthew Dixon and Brent Adamson, Portfolio/Penguin 2011

What tools are you using to manage your IT services?

Author: Ray Wright

IT Best Practices: Minimizing Mean Time to Resolution


Driving IT efficiency is a topic that always makes it to the list of top issues for IT organizations and MSPs alike when budgeting or discussing strategy. In a recent post we talked about automation as a way to help reduce the number of trouble tickets and, in turn, to improve the effective use of your most expensive asset – your professional services staff. This post looks at the other side of the trouble ticket coin – how to minimize the mean time to resolution of problems when they do occur and trouble tickets are submitted.

The key is to reduce the variability in time spent resolving issues. Easier said than done? Mean Time to Repair/Resolve (MTTR) can be broken down into 4 main activities, as follows:

  • Awareness: identifying that there is an issue or problem
  • Root-cause: understanding the cause of the problem
  • Remediation: fixing the problem
  • Testing: verifying that the problem has been resolved

Of these four components, awareness, remediation, and testing tend to be the smaller activities and also the less variable ones.

The time taken to become aware of a problem depends primarily on the sophistication of the monitoring system(s). Comprehensive capabilities that monitor all aspects of the IT infrastructure and group infrastructure elements into services tend to be the most productive. Proactive service level monitoring (SLM) enables IT operations to view systems across traditional silos (e.g. network, server, applications) and to analyze the performance trends of the underlying service components. By developing trend analyses in this way, proactive SLM can identify future issues before they occur – for example, when application traffic is expanding and bandwidth is becoming constrained, or when server storage is reaching its limit. When unpredicted problems do occur, being able to quickly identify their severity, eliminate downstream alarms and determine business impact also helps to contain variability and deploy the correct resources for maximum impact.
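
As a toy illustration of this kind of trend analysis – made-up free-space samples, fitted with a straight line to estimate when a datastore runs out:

    # Fit a line to recent free-space samples and extrapolate to zero.
    # The sample numbers are purely illustrative.
    import numpy as np

    days = np.array([0, 7, 14, 21, 28])            # sample times (days)
    free_gb = np.array([500, 460, 430, 380, 350])  # free space at each sample

    slope, intercept = np.polyfit(days, free_gb, 1)  # GB per day (negative)
    days_to_zero = -intercept / slope
    print("Storage exhausted in roughly %.0f days" % days_to_zero)  # ~92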

Identifying the root cause is usually the biggest source of MTTR variability and the one with the highest cost associated with it. Once again the solution lies both with the tools you use and the processes you put in place. Often management tools are selected by each IT function to help with their specific tasks – the network group has in-depth network monitoring capabilities, the database group has database performance tools, and so on. These tools are generally not well integrated and lack visibility at the service level. Correlation using disparate tools is also often manpower-intensive, requiring staff from each function to meet and try to avoid the inevitable “finger-pointing”.

The service level view is important, not only because it provides visibility into business impact, but also because it represents a level of aggregation from which to start the root cause analysis. Many IT organizations start out by using free open source tools but soon realize there is a cost to “free” as their infrastructures grow in size and complexity. Tools that look at individual infrastructure aspects can be integrated but, without underlying logic, they have a hard time correlating events and reliably identifying root cause. Poor diagnostics can be as bad as no diagnostics in more complex environments. Investigating unnecessary downstream alarms to make sure they are not separate issues is a significant waste of resources.

Consider a frequently cited cause of MTTR variability – poor application performance. In this case nothing is specifically “broken”, so it’s hard to diagnose with point tools. A unified dashboard that shows both application process metrics and network or packet-level metrics provides a valuable diagnostic view. As a simple example, a response-time monitor could send an alert that the response time of an application is too high. Application performance monitoring data might indicate that a database is responding slowly to queries because its buffers are starved and the number of transactions is abnormally high. Integrating with network NetFlow or packet data then allows immediate drill-down to isolate which client IP address is the source of the high number of queries. This level of integration speeds root cause analysis and removes the finger-pointing, so that the optimum remedial action can be quickly identified.

Once a problem has been identified, the last two pieces of the MTTR equation can be addressed. The times required for remediation and testing tend to be far less variable and can be shortened by defining clear processes and responsibilities. Automation can also play a key role. For example, a great many issues are caused by misconfiguration. Rolling back configurations to the last good state can be done automatically, quickly eliminating issues even while in-depth root-cause analysis continues. Automation can play a vital role in testing too, by making sure that performance meets requirements and that service levels have returned to normal.

To maximize IT efficiency and effectiveness and help minimize mean time to resolution, IT management systems can no longer work in vertical or horizontal isolation. The interdependence between services, applications, servers, cloud services and network infrastructure mandates the adoption of comprehensive service-level management capabilities for companies with more complex IT service infrastructures. The amount of data generated by these various components is huge, and the rate of generation is so fast that traditional point tools cannot integrate it or keep up with any kind of real-time correlation.

Learn more about how Kaseya technology can help you manage your increasingly complex IT services. Read our whitepaper, Managing the Complexity of Today’s Hybrid IT Environments.

What tools are you using to manage your IT services?

Author: Ray Wright

Using IT Management Tools to Deliver Advanced Managed Services


The managed service marketplace continues to change rapidly, particularly as more and more businesses are adopting cloud-based services of one type or another. The cloud trend is impacting managed service providers in numerous ways:

  • Cloud migration increases the potential for consulting projects to manage the transition but reduces the number of in-house servers/devices that need to be monitored/managed, impacting retainer-based revenues.
  • Service decisions are more heavily influenced by line-of-business concerns as customers shift spending from capital to operating expenses.
  • Service providers need to be able to monitor and manage application-related service level agreements, not just infrastructure components.
  • Managing cloud-based applications is less about internal cloud resources and their configuration (which are typically controlled by the cloud provider) and more about access, resource utilization and application performance.

To address these challenges and be able to compete in a marketplace where traditional device-based monitoring services are becoming commoditized, MSPs must create and sell more advanced services to meet the changing needs of their customer base. The benefits of delivering higher value services include greater marketplace differentiation, bigger and more profitable deals, and greater customer satisfaction.

Advanced services require the ability to deploy a proactive service level monitoring system that can monitor all aspects of a customer’s hybrid cloud environment, including the core internal network infrastructure, virtualized servers, storage, and both internal and cloud-based applications and resources. Predictive monitoring services help ensure uptime and increased service levels via proactive and reliable notification of potential issues. Advanced remedial actions should include not only rapid response when the circumstances warrant, but also regular customer reviews to discuss longer term configuration changes, additional resource requirements (e.g. storage), policy changes and so on, based on predictive SLA compliance trend reports.

Beyond advanced monitoring, professional skill sets are also very important, particularly when it comes to managing new cloud-based application services. Scaling to support larger customers requires tools that can help simplify the complexity of potential issues and speed mean time to resolution. If every complex issue requires the skills of several experts – server, database, network, application, etc. – it will be hard to scale your organization as your business grows. Having tools that can quickly identify root causes across network and application layers is vital.

Cloud customers are increasingly relying on their managed service providers to “fix” problems with their cloud services, whether they are performance issues, printing issues, integration issues or anything else. Getting a rapid response from a cloud service provider who has “thousands of other customers who are working just fine” is hard, especially as they know problems are as likely caused by customer infrastructure issues as by their own. Learning tools help MSPs capture and share experiences and turn them into repeatable processes as they address each new customer issue.

Automation is another must-have for advanced service providers. Again, scalability depends on being able to do as much as possible with as few resources as possible. For infrastructure management, automation can help with service and application monitoring as well as network configuration management. Monitoring with application-aware solutions is an attractive proposition for specific applications. For the rest, it helps to be able to establish performance analytics against key performance indicators and diagnostic monitoring of end-user transaction experiences. For network management, being able to quickly compare network configurations and revert to earlier working versions is one of the fastest ways to improve mean time to repair. Automating patch management and policy management for desktops and servers also results in significant manpower savings.

Finally, tools that help manage cloud service usage are also invaluable as customers adopt more and more cloud services. Just as on-premise license management is an issue for larger organizations, so cloud service management, particularly for services such as Office 365, is also an issue. Access to email accounts and collaborative applications such as SharePoint is not only a security issue, it’s also a cost and potentially a performance issue.

Developing advanced services requires a combination of the right skills, resources, processes and technology; in effect, a higher level of organizational maturity. Larger customers will tend to have developed greater levels of process and IT maturity themselves, in order to be able to manage their own growing IT environment. When they turn to a managed service provider for help they will be expecting at least an equivalent level of service provider maturity.

Having the right tools doesn’t guarantee success in the competitive MSP marketplace but can help MSPs create and differentiate more advanced services, demonstrate the ability to support customer-required SLAs, and scale to support larger and more profitable customers.

Learn more about how Kaseya technology can help you create advanced managed services. Read our whitepaper, Proactive Service Level Monitoring: A Must Have for Advanced MSPs.

What tools are you using to create advanced managed services?

Author: Ray Wright

Deliver Higher Service Quality with an IT Management Command Center


NASA’s Mission Control Center (MCC) manages very sophisticated technology, extremely remote connections, extensive automation, and intricate analytics – in other words, the most complex world imaginable. A staff of flight controllers at the Johnson Space Center manages all aspects of aerospace vehicle flights. Controllers and other support personnel monitor all missions using telemetry and send commands to the vehicle using ground stations. Managed systems include attitude control and dynamics, power, propulsion, thermal, orbital operations and many other subsystems. They do an amazing job managing this complexity from their command and control center. Of course, we all know how it paid off with the rescue of Apollo 13. All astronauts surely appreciate the high service quality the MCC provides.

The description of what they do sounds a bit like the world of IT management. While maybe not quite operating at the extreme levels of NASA’s command center, today’s IT managers are working to deliver ever higher service quality amid an increasingly complex world. Managing cloud, mobility, and big data combined with the already complex world of virtualization, shared infrastructure and existing applications, makes meeting service level expectations challenging to say the least. And adding people is usually not an option; these challenges need to be overcome with existing IT staff.

As is true at NASA, a centralized IT command center, properly equipped, can be a game changer. Central command doesn’t mean that every piece of equipment and every IT person must be in one location, but it does mean that every IT manager has the tools, visibility and information they need to maximize service quality and achieve the highest level of efficiency. To achieve this, here are a few key concepts that should be part of everyone’s central command approach:

Complete coverage: The IT command center needs to cover the new areas of cloud (including IaaS, PaaS, SaaS), mobility (including BYOD), and big data, while also managing the legacy infrastructure (compute, network, storage, and clients) and applications. Business services are now being delivered with a combination of these elements, and IT managers must see and manage it all.

True integration: IT managers must be able to elegantly move between the above coverage areas and service life-cycle functions, including discovery, provisioning, operations, security, automation, and optimization. A command center dashboard with easy access, combined with true SSO (Single Sign-On) and data level integration, allows IT managers to quickly resolve issues and make informed decisions.

Correlation and root cause: The ability to make sense of a large volume of alerts and management information is mandatory. IT managers need to know about any service degradation in real time, together with its root cause, before a service outage occurs. Mission-critical business services are most often based on IT services, so service uptime needs to meet the needs of the business.

Automation: As the world of IT becomes more and more complex, it is impossible to achieve required service levels and efficiency when repetitive, manual, error-prone tasks are still part of the process. IT managers need to automate everything that can be automated! From software deployment and patch management to client provisioning and clean-up, automation is the key to higher service quality and substantial improvement in IT efficiency.

In addition to these requirements for central command, cloud-based IT management is now a very real option. With the growing complexity of cloud, mobility, and big data, along with the ever increasing pace of change, building one’s own command center is becoming more challenging. IT management in the cloud may be the right choice, especially for MSPs and mid-sized enterprises which do not have the resources to continually maintain and update their own IT management environment. Letting the IT management cloud provider keep up with the complexity and changes, while the IT operations team focuses on delivering high service quality, can drive down TCO, improve efficiency, and increase service levels.

Author: Tom Hayes

Building the World’s Fastest Remote Desktop Management – Part 3


In previous installments of this series, we went over some key technologies used for the new Kaseya Remote Control solution, and some of the features that make it so fast.

But possibly the most important part of getting a fast and reliable Remote Control session is the network connectivity used under the hood. In this post, we cover the types of connectivity used for Kaseya Remote Control, the advantages of each, and how we combine them for additional benefit.

P2P Connectivity

Peer-to-peer connectivity is the preferred method of networking between the viewer application and the agent. It generally offers high throughput and low latency – and because the viewer and agent are connected directly, it places no additional load on the VSA server.

Kaseya Remote Control uses an industry standard protocol called ICE (Interactive Connectivity Establishment) to establish P2P connectivity. ICE is designed to test a wide variety of connection options to find the best one available for the current network environment. This includes TCP and UDP, IPv4 and IPv6, and special network interfaces such as VPN and Teredo.

In addition, ICE takes care of firewall traversal and NAT hole punching. To achieve this, it makes use of the fact that most firewalls and NATs allow reverse connectivity on ports that have been used for outbound connections. This ensures no additional firewall configuration is required to support the new Remote Control solution.

ICE will select the best available P2P connection based on the type of connectivity, how long each connection takes to establish, and a variety of other factors. In practice, this means you will usually get TCP connectivity on local networks, UDP connectivity when crossing network boundaries, and VPN connectivity when no other options are available.

However, testing a wide variety of connectivity options can take several seconds – and in some network environments, it may not be possible to get a P2P connection on any network interface. This brings us to…

Relayed Connectivity

As an alternative to P2P connectivity, Kaseya Remote Control also uses connections relayed through the VSA. Relayed connections are quick to establish and unlikely to be affected by firewalls or NAT devices. They also tend to be more stable over long periods of time, especially relative to P2P connections over UDP.

In practical terms, a relayed connection is made up of outbound TCP connections from the viewer and agent to the VSA, where they are linked up for bidirectional traffic forwarding.

To minimize the network impact, relayed connections from the agent use the same port on the VSA as the agent uses for check-ins. This means that any time an agent can check in, it can also establish a relay connection. Similarly, on the viewer side, relayed connections use the same port on the VSA as the browser uses to view the VSA website: if one works, so will the other.

Combining P2P & Relayed Connectivity

It’s clear that P2P and relayed connectivity both have their distinct advantages, so it would be a shame to settle for just one or the other. To get the best of both worlds, the new Kaseya Remote Control uses both types of connectivity in parallel. In particular:

  • When a new Remote Control session starts, we immediately attempt to establish both types of connectivity.
  • As soon as we get a connection of any type, the session starts. Typically, relayed connectivity will be established first, so we’ll start with that. This results in very quick connection times.
  • With the session now underway, we continue to look for better connectivity options. In most cases, a P2P connection will become available within a few seconds.
  • When a P2P connection is established, the session will immediately switch over from relayed to P2P connectivity. This is totally seamless to the user, and causes no interruption to video streaming or mouse and keyboard events.
  • Even if a P2P connection is established, the relayed connection is maintained for the duration of the session. So if P2P connectivity drops off for any reason, Kaseya Remote Control will seamlessly switch back to the relayed connection, while attempting to establish a new P2P connection in the background.

The upshot of all this is that you get fast connection times, high throughput, low latency, and a robust Remote Control connection, all at the same time: no compromises required!
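
To make the race-then-upgrade strategy concrete, here is an illustrative asyncio sketch (not Kaseya’s actual implementation); connect_relay and connect_p2p are hypothetical stand-ins for the real transports:

    import asyncio

    async def connect_relay():
        await asyncio.sleep(0.1)   # relays tend to come up quickly
        return "relay"

    async def connect_p2p():
        await asyncio.sleep(2.0)   # ICE candidate checks can take seconds
        return "p2p"

    async def start_session():
        active = None

        def use(conn):
            nonlocal active
            if active is None or conn == "p2p":  # P2P always wins
                active = conn
                print("session now running over", conn)

        relay = asyncio.create_task(connect_relay())
        p2p = asyncio.create_task(connect_p2p())

        # Start the session on whichever connection lands first...
        done, _ = await asyncio.wait({relay, p2p},
                                     return_when=asyncio.FIRST_COMPLETED)
        use(done.pop().result())

        # ...then upgrade seamlessly once P2P succeeds, keeping the relay
        # open as a fallback for the rest of the session.
        use(await p2p)

    asyncio.run(start_session())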

Let Us Know What You Think

The new Desktop Remote Control will be available with VSA 7.0 on May 31. We’re looking forward to getting it into customer hands, and receiving feedback. To learn more about Kaseya and our plans please take a look at our roadmap to see what we have in store for future releases of our products.

Mobile App Management for Ultimate Control of Mobile Devices


If you ever want to see top-notch diligence, watch a first-time mom pack her toddler’s lunch box – at least in my house. She packs every snack and meal in separate pouches and in measured quantities. She keeps all the unhealthy high-sugar stuff out and includes all my son’s favorite foods so that he eats well even when she is not around. The only other time I witnessed such meticulous control over what goes in, what doesn’t, and how much was by the IT staff at my previous company, which had 100,000+ employees globally.

IT administrators, as caretakers of corporate IT infrastructure, have always wanted close, tight and centralized control of every IT asset to ensure business continuity, optimal performance, high productivity and bullet-proof security. As the size and complexity of the infrastructure increase, the IT management challenges rise exponentially. Companies today extend mobile access to corporate data, in many cases along with company-owned mobile devices. About a decade ago companies trusted BlackBerrys for their secure, encrypted access. There weren’t any third-party phone apps that users could find and install by themselves on those devices. It was a thick-walled garden back then. But with the advent of the Android, iOS and Windows mobile platforms, the walls have come down and it has now become a fenceless park within a gated community.

The plethora of apps on each of these mobile platforms has given users the freedom to install any program that they believe will boost productivity. Consequently, the line between the corporate network and the external world is getting blurred. This makes IT staff, including CIOs and CISOs, nervous, as it exposes sensitive corporate data to potential security breaches. IT staff therefore now want to control and restrict the mobile apps on phones that access corporate systems, data and applications. In Mobile Device Management (MDM) parlance, this is known as Mobile App Management.

Mobile App Management (MAM) allows IT admins to maintain an inventory of the apps installed on all the mobile devices in the corporate network. It also allows admins to create app catalogs to enforce policy compliance on these distributed mobile endpoints.

Some of the other key features of MAM include:

  • Remote monitoring of apps installed on devices by
    • Blocking apps through a “Disallowed apps policy” with on screen alerts and compliance view
    • Enforcing required apps through a “Required apps policy” that sends app invites with links and reminds users periodically if they are not in compliance
  • Capability to push enterprise apps to devices remotely
  • Support for App Store applications

When integrated with an enterprise IT management solution that IT admins use to remotely monitor networks, servers, desktops, and laptops, such as Kaseya VSA with MDM, MDM/MAM enables centralized control of the IT infrastructure, including the mobile workforce, from a single pane of glass. In addition to MAM, the Kaseya MDM solution also offers features like loss and theft handling, geolocation tracking, device locking, audit, and complete device management and configuration. It enables IT admins to treat mobile devices just like computers.

In summary, Mobile App Management is a critical aspect of MDM for ensuring the security of enterprise data accessed from mobile devices. An MDM solution that integrates with a centralized IT management solution gives IT staff a powerful tool to centrally command and remotely manage their enterprise IT infrastructure, including the mobile workforce.

Author: Varun Taware

Why IT Operations Needs a Comprehensive IT Management Cloud Solution

Performance-related issues are among the hardest IT problems to solve, period. When something is broken, alarm bells sound (metaphorically, in most cases) and alerts are sent to let IT ops know there’s an issue. But when performance slows for end-user applications, there’s often no notification until someone calls to complain. Yet, in reality, losing 10 or 15 seconds every minute for an hour is at least as bad as an hour of downtime in an 8-hour day. At least as bad, and maybe worse: there’s the productivity hit, but there’s also significant frustration. At least when the system is really down, users can turn their attention to other tasks while the fixes are installed.


One reason why this remains an ongoing issue for many IT organizations is that there are few management tools that provide an overall view of the entire IT infrastructure with the ability to correlate across all of its different components. It’s not that there aren’t individual tools to monitor and manage everything in the system; it’s that coordinating results from these different tools is time consuming and hard to do. There are lots of choices when it comes to server monitoring, desktop monitoring, application monitoring, network monitoring, cloud monitoring, etc., and there are suites of products that cover many of the bases. The challenge is that in most cases these management system components never get fully integrated to the point where the overall solution can find a problem and quickly identify its root cause.

If IT were a static science, it’s a good bet that this problem would have been solved long ago. But as we know, IT is a hotbed of innovation. New services, capabilities and approaches are released regularly, and the immense variety of infrastructure components supporting today’s IT environments makes it difficult for monitoring and management vendors to keep up. New management tools appear frequently too, but the cost and effort of addressing existing infrastructures is often prohibitive for start-ups trying to get their new management capabilities to market quickly.

The complexity and pace of change lead some IT organizations to rely on open source technologies and “freeware”, with the benefit that capital costs and operational expenses are kept to a minimum. Yet the results of using such tools are often less than satisfactory. While users can benefit greatly from the community of developers, it’s often hard to get a comprehensive product without buying a commercially supported version. Another issue is that open source IT management solutions are generally not architected to support a large and increasingly complex IT infrastructure – at least not in a way that makes it possible to quickly perform sophisticated root-cause analyses. The result is that while the tools may be inexpensive, the time and resources needed to use them can be much greater, and their impact less than satisfactory.

IT management is its own “big data” problem.

As IT infrastructure continues to become ever more complex, IT management is becoming its own big data problem. Querying an individual device or server to check status and performance may retrieve only a relatively small amount of data for the management or monitoring system – a small volume, but likely a diverse set of information covering numerous parameters and configuration items. Polling mobile devices and desktops, servers, applications, cloud services, hypervisors, routers, switches, firewalls… generates a whole lot of data, each item having its own unique set of parameters and configurations to retrieve.

Polling hundreds, thousands or even tens of thousands of devices every few minutes (so that the management system stays current with device status) can create significant network traffic that must be supported without impacting business applications. On top of that, the volume of data, the polling frequency, and the resultant velocity of traffic must be accommodated to support storage, trend analysis and real-time processing. Systems management information is usually stored for months or even years, so that historical trends and analyses can be performed.

Most importantly, the management system needs to rapidly process information in real time to correlate cause and effect, disable downstream alert and alarm conditions, and perform predictive analysis so that valid messages can be proactively sent to alert IT ops. Now systems management architecture becomes important. Add the need for flexibility to accommodate the ever-changing IT landscape, and management system design to support this “big data application” becomes a critical issue.
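
A quick back-of-envelope calculation, with purely hypothetical numbers, shows how fast the volume adds up:

    devices = 10_000      # managed endpoints, servers and network gear
    poll_kb = 4           # KB of status/metric data per device per poll
    interval_s = 60       # seconds between polls

    polls_per_day = 86_400 / interval_s
    daily_gb = devices * poll_kb * polls_per_day / 1024**2
    print("%.0f GB of raw management data per day" % daily_gb)       # ~55 GB
    print("%.1f TB per year if retained" % (daily_gb * 365 / 1024))  # ~19.6 TB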

This, in part, is why IT management vendors are migrating their solutions to the cloud. As IT infrastructures continue to expand in terms of size, complexity and diversity the ability to support the volumes of data generated and the performance needed to undertake comprehensive root-cause analysis becomes more and more challenging for purely on-premise solutions. In addition, cloud solutions offer the promise of “best practice” advice that can only be derived from a shared-service model, with the caveat that security and privacy remain paramount.

Of course, cloud solutions, with their pay-as-you-use pricing and infrastructure-free installations, are also becoming far more attractive than perpetual licensing arrangements. However, the bottom line is that cloud-architected solutions are extremely extensible and able to accommodate new functionality more quickly and easily, to the benefit of all users – not least the ability to deploy better diagnostic tools and capabilities to support the needs of today’s diverse IT user communities for high levels of systems availability AND performance.

Author: Ray Wright

3 Keys to Managing Today’s Complex IT Infrastructures


Many factors contribute to the increasing complexity of business IT environments today: the rapid adoption of cloud computing, big data and mobile device proliferation, to name a few. These and other key trends are making it harder for organizations to effectively and efficiently manage and secure the environment and to assure IT service levels and business success.

Newer requirements include controlling mobile devices, maintaining visibility into virtualized resources and services, and achieving increasingly demanding SLAs for critical business applications. Effective IT service management becomes more challenging when services are reliant on dynamically shared resources and when some resources are on-premise and some are in a public cloud.

Meeting these challenges has often been labor intensive and costly because the available IT management tools have been narrowly focused and poorly integrated. This post outlines key management concerns and identifies what IT professionals should look for when determining the best approach to addressing management complexity.

Factors Driving Change

Cloud computing, mobility and big data are being adopted by enterprises of all sizes. However, delivering the benefits reliably and consistently across a distributed organization often requires a complex combination of infrastructure and support technologies.

Cloud Computing – The cloud is growing in popularity because it provides organizations with faster access to applications and services while reducing the development cost. However, as these applications and services get integrated into existing processes and with existing applications, both the new and the old need to be managed together. New tools are needed to manage cloud infrastructures and applications, but these cannot be separate from the tools managing legacy applications and the existing infrastructure. More tools equal more complexity, increased staffing requirements and lower efficiency.

Mobility – Employees today need to be able to work remotely – from home and from the road. Sometimes they use company-owned and provided devices, and sometimes they prefer to use their own (commonly called Bring Your Own Device, or BYOD – who wants to carry two smartphones these days, with separate contact lists and email accounts?). As a result, companies have to provide management solutions that address both of these scenarios. For company-owned devices, secure access, data back-up and loss prevention are critical issues. For BYOD the same things are important, but it’s also necessary to distinguish between personal files – contacts, photos, calendar – and business information, so that the loss of the device doesn’t necessarily mean the loss of personal information.

Big Data – For most small and mid-sized businesses, big data refers to working with, and obtaining results from, the analysis of large and complex data sets provided by SaaS companies, third-party vendors and service providers – for example, utilizing social media data to identify market opportunities and target prospects. The issue is that big data fosters changes in how a company operates and increases the number and type of users who need information access. This increases the number of users, the types of devices, the network traffic, the importance of data integrity, the need for system reliability and performance, the volume of stored data, the number of application interactions… the infrastructure becomes more complex and now carries more information vital to business success. Accordingly, systems management and the maintenance of service levels become increasingly important too.

Management Impact

The management implication is that the new capabilities need to be managed along with legacy infrastructures and applications in an integrated, automated fashion. Integration is important because new and legacy resources together deliver services to the business. Understanding relationships, dependencies, security, and performance is vital to meeting business service commitments. Automation is important because of the increasing complexity and growing number of management tasks which can no longer be managed with manual approaches.

In fact, IT management can itself become a big data problem. The scale of data created by IT management systems – the collection of frequently polled device management data, events, logs, etc. – is very significant. Real-time analysis and reporting on this data are required to make the actionable decisions necessary to keep the new “hybrid” IT environments performing and meeting the needs of the business.

What to look for in an IT management solution

What’s needed is an integrated, comprehensive and cloud-based management tool with extensive automation capabilities. Tools should meet the following three requirements – they should be able to:

  1. Manage cloud infrastructure and application services along with legacy on-premise services with an integrated management system, all within a single command center.
  2. Manage company-owned and employee-owned mobile devices, along with traditional end user clients, as part of an integrated management solution, including the ability to remotely access devices anytime, anywhere.
  3. Automate every manual, repetitive task possible to maximize IT efficiency and reduce human error.

Kaseya’s IT management solution integrates a wide range of management capabilities to enable IT organizations and MSPs to command everything within IT centrally, to manage remote and distributed environments with ease, and to automate all aspects of IT management, delivering higher service quality and greater IT efficiency. Kaseya enables IT professionals to manage all aspects of the IT environment – including on-premise, cloud, hybrid-cloud, virtualized, distributed and mobile components. And Kaseya’s solution itself is delivered via the Kaseya IT management cloud or as on-premise software.

Author: Ray Wright
