You all know the story about the MSP market moving away from break/fix to offering richer services and becoming a strategic partner.
Being a trusted advisor means you help clients spot the future, find the technologies that offer specific competitive advantage, then pilot, deploy and manage these new technologies.
Trends over the last few years have only made this shift more important. Much of it has to do with the rapid growth of the cloud alongside the staying power of on-premises infrastructure. Together, these create a need to manage and integrate both environments, often resulting in hybrid clouds.
Earlier this week, Premier Healthcare, a Bloomington, Indiana-based healthcare provider, reported a data breach that could affect over 200,000 people after a thief stole a laptop containing patient information. While the laptop had password protection, it wasn’t encrypted.
While the bulk of data breaches making the news are the result of phishing or hacks, a large percentage of breaches are caused by straightforward stealing. And while, in this instance, the laptop was taken from a secured onsite billing area (suggesting forethought), many thefts are simply opportunistic — a smash and grab of a laptop left on a car seat, for example.
Whether your company laptops store sensitive healthcare information, private information helpful to identity thieves, or proprietary client information, you need to be prepared to limit exposure in the case of theft.
When a laptop is stolen, you probably worry about these three things right away:
How do you make a hacker happy? Make sure your systems aren’t patched with the latest fixes. Verizon’s 2015 Data Breach Investigations Report revealed that 99.9% of vulnerability exploits happen more than a year after the specific vulnerability was reported. In fact, 97% of the attacks in 2014 exploited just ten published vulnerabilities – and patches were available to fix every one of them. Better patch management could have significantly lowered that number. Patch management is critical for IT managers in all industries, but let’s use the healthcare industry as an example.
Security Breaches Are Costly
In a 2015 survey, healthcare executives reported that in the last two years 81% of their healthcare facilities had been compromised – and less than half felt they were properly prepared to prevent future attacks. In 2013 alone, it is estimated that breaches cost healthcare facilities $1.6 billion and affected millions of patients.
When I think back to the security situation at Microsoft around 2002, when Bill Gates released his famous Trustworthy Computing (TwC) memo, our software industry was frail at best. What followed has been over a decade of improvements in software security and security engineering as a discipline. From process to tools. From attitudes to insights. I have been privileged to be part of that and to learn from some great leaders on the subject, like Michael Howard and Gary McGraw.
I am talking about security as in “threats,” not “features.” Kaseya has a strong history of delivering security features that help increase endpoint security: antivirus, anti-malware, patch management and policy-based IT that hardens the endpoints we manage. In this post, though, I want to introduce you to the work we have been doing inside Kaseya to focus on the threat landscape by building stronger security engineering into our company. It is hugely different, and comes down to some core beliefs that have become part of our corporate DNA.
“Simple can be harder than complex: You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains” ~ Steve Jobs.
In this blog article, Don LeClair, Kaseya’s EVP for Product Management, expounded on the Product Design principles we follow here at Kaseya. In today’s post, I am going to discuss a specific design principle – Simplify Everything. I will explain how we used this guiding principle to design Kaseya’s Enterprise Mobility Management (EMM) solution with the sole focus of driving ease-of-use and thus greater value to our customers.
I recently read Verizon’s 2014 Data Breach Investigations Report, which investigated 63,437 confirmed security incidents, including 1,367 confirmed data breaches, across 50 organizations in 95 countries. The public sector had the highest number of security incidents, whereas the finance industry had the highest number of confirmed data breach incidents (no surprise there!). These security incidents were mostly one of the following:
- POS Intrusions
- Web App Attacks
- Physical Theft/Loss
- Miscellaneous Errors
- Card Skimmers
- Cyber Espionage
- DoS Attacks
Given your industry and the size of your company, some of these may not matter to you (until they happen to you). But there are three types of security incidents that are universally applicable, especially in this age of exploding mobile device adoption: Insider Misuse, Physical Theft/Loss and Miscellaneous Errors. It takes just a single lapse in security measures for an organization, whether public, private or government, to end up in the headlines.
Further elaborating on the “Insider Misuse” threat, the Verizon report adds that over 70 percent of IP theft cases occur within a month of an employee announcing their resignation. Such departing employees mostly steal customer data and internal financial information. This has been made easier for these employees by permitting them to use their personal devices, which walk out with them when they leave.
The percentage of servers which are virtualized continues to grow, but management visibility continues to be a challenge. In this blog post we look at the three key monitoring capabilities – full metal, datastore, and performance – to give you the visibility and control you need to keep your virtualized applications performing well.
Before we start, below is a description of the information models which are important to hypervisor management:
Common Information Model
Common Information Model, or CIM, is an open standard that defines the management and monitoring of devices, and elements of devices, in a datacenter.
VMware Infrastructure API
The VI API is a proprietary implementation of CIM provided by VMware for management and monitoring of components related to the VMware hypervisor.
Full metal monitoring
The fan is essential for proper server function. As rack density goes up, server volume shrinks and fans need to work at higher speeds, which means more wear and tear. A broken fan in a server can quickly cause major heat buildup that affects the server and possibly neighbouring servers. The good thing is that it’s relatively easy to monitor the state of the fans. The CIM_Fan class exposes a property called HealthState that reports the health of a fan: OK, degraded, or failed.
Power supply health is important to monitor. Most enterprise servers can be configured with redundant power supplies, and it’s good to keep a spare on hand as well. OMC_PowerSupply is a class that exposes the HealthState property for each PSU in your server. Just like fan health, the PSU is reported as OK, degraded, or failed.
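Both fan and power supply health come back through the same CIM HealthState property, so one small helper can interpret either. Below is an illustrative Python sketch: the numeric codes follow the DMTF CIM schema, but the `classify()` helper and its grouping into OK/degraded/failed are my own, not part of any VMware or Kaseya API. With a WBEM client such as pywbem you would enumerate the fan and power supply instances and feed each HealthState value into a check like this.

```python
# Sketch: collapsing CIM HealthState codes into the three states used in
# this post. Codes follow the DMTF CIM schema (CIM_ManagedSystemElement);
# the classify() helper itself is illustrative, not a vendor API.

HEALTH_STATES = {
    0: "Unknown",
    5: "OK",
    10: "Degraded/Warning",
    15: "Minor failure",
    20: "Major failure",
    25: "Critical failure",
    30: "Non-recoverable error",
}

def classify(health_state: int) -> str:
    """Collapse a CIM HealthState code into OK / degraded / failed."""
    if health_state == 5:
        return "OK"
    if health_state in (10, 15):
        return "degraded"
    if health_state in (20, 25, 30):
        return "failed"
    return "unknown"
```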
The VI API can be used to measure average power usage, which gives an indication of the server’s utility cost. More power usage means more heat, which drives even more utility cost in the form of heat dissipation. The VI API exposes this through the power.power.average counter.
RAID controller, storage volume and battery backup
Three key storage elements that you should monitor are the RAID controller, storage volumes and the battery. The controller and disks seem obvious, but the battery? In many cases a high-performance RAID controller will have a battery to back up the onboard memory in case of a power outage. The memory on the controller is most commonly used for write-back caching, and when the server loses power, the battery ensures that the cache remains consistent until power is restored and its contents can be written to disk.
Utilization, IOPS and latency are metrics that should be monitored and analyzed together. When you have performance problems in a disk subsystem, an “OK” latency can point you toward problems with IOPS, while high utilization can explain why you may not be getting the expected IOPS out of the system, and so on.
The utilization can be calculated using the capacity and freeSpace properties of the DatastoreSummary object.
IO operations per second can be monitored using the VI API datastore.datastoreIops.average counter, which provides an average of read and write IO operations.
Latency can be measured using datastore.totalWriteLatency.average and datastore.totalReadLatency.average counters. They will show you average read and write latency for the whole chain, which includes both kernel and device latency.
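To show how these three metrics can be analyzed together, here is a minimal Python sketch. The capacity/freeSpace arithmetic and counter semantics follow the VI API description above, but the function names and the warning thresholds are illustrative assumptions on my part, not VMware or Kaseya recommendations.

```python
# Sketch: combining datastore utilization, IOPS, and latency in one check.
# Thresholds below are illustrative, not vendor recommendations.

def utilization_pct(capacity_bytes: int, free_bytes: int) -> float:
    """Utilization from the DatastoreSummary capacity/freeSpace properties."""
    return 100.0 * (capacity_bytes - free_bytes) / capacity_bytes

def assess_datastore(capacity_bytes, free_bytes, avg_iops, avg_latency_ms,
                     util_warn=85.0, latency_warn_ms=25.0):
    """Return a list of warnings for a single datastore sample."""
    warnings = []
    util = utilization_pct(capacity_bytes, free_bytes)
    if util > util_warn:
        warnings.append(f"high utilization: {util:.1f}%")
    if avg_latency_ms > latency_warn_ms:
        # High latency includes the whole chain: kernel and device latency.
        warnings.append(f"high latency: {avg_latency_ms:.1f} ms "
                        f"at {avg_iops:.0f} IOPS")
    return warnings

# Example sample: 2 TB datastore with 100 GB free, 850 IOPS, 32 ms latency
sample = assess_datastore(2 * 1024**4, 100 * 1024**3, 850, 32.0)
```

A sample like the one above would trip both the utilization and latency warnings, prompting a closer look at the IOPS the datastore is actually delivering.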
Threads scheduled to run on a CPU can be in one of two states: waiting or ready. Both states can tell a story about resource shortage. The lesser evil of the two is the wait state, which indicates that the thread is waiting for an IO operation to complete. This can be as simple as waiting for an answer from a resource external to the host, or waiting on disk time. The more serious state is the so-called “ready” state, which indicates that the thread is ready to run, but there is no free CPU to serve it.
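A handy rule of thumb is to express ready time as a percentage of the sampling interval. The VI API reports ready time as milliseconds accumulated over the interval (the cpu.ready.summation counter); the conversion below is the commonly used formula, with an illustrative 20-second interval and function name.

```python
# Sketch: converting a CPU ready-time sample into a percentage of the
# sampling interval. The VI API reports ready time in milliseconds
# accumulated over the interval; the 20 s interval here is illustrative.

def ready_percent(ready_ms: float, interval_seconds: float) -> float:
    """Percentage of the interval a vCPU spent ready but not running."""
    return 100.0 * ready_ms / (interval_seconds * 1000.0)

# A VM that accumulated 2,000 ms of ready time over a 20 s interval
# spent 10% of its time waiting for a free CPU to serve it.
pct = ready_percent(2000, 20)
```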
Memory ballooning and IOPS
Memory ballooning is a process that can happen when a host experiences a low memory condition and probes the virtual machines for memory to free up. The balloon driver in each VM tries to allocate as much memory as possible within the VM (up to 65% of the available VM memory), and the host will free this memory to add to the host memory pool.
The memory ballooning counter, mem.vmmemctl.average, can show when this happens. So how can memory ballooning make a dent in your IO graph, you may ask? After the host reclaims memory from VMs, those VMs can start to use their own virtual memory and page memory blocks to disk, which is why memory ballooning may precede a higher-than-normal rate of IO operations.
Ballooning may happen even when there is no issue; it’s a strategy for the host to make sure there is free memory for any VM to consume. Host swapping, however, is always a sign of trouble. There are a number of host swap counters that you want to monitor.
These counters show, both cumulatively and as a rate, how much memory is swapped in and out. Host memory swapping is double trouble: not only does it indicate that you have a low host memory situation, it is also going to affect IO performance.
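Because ballooning can be benign while host swapping never is, it helps to evaluate the two together. In the sketch below, mem.vmmemctl.average is the ballooning counter named above; the swap-rate counter names follow VI API naming conventions but are assumptions on my part, and the classification logic itself is illustrative.

```python
# Sketch: separating benign ballooning from host swapping in one sample.
# mem.vmmemctl.average is the ballooning counter discussed above; the
# swap-rate counter names below are assumed, and thresholds illustrative.

def memory_status(counters: dict) -> str:
    """Classify a host memory sample from a dict of counter values."""
    swapping = (counters.get("mem.swapinRate.average", 0) > 0 or
                counters.get("mem.swapoutRate.average", 0) > 0)
    ballooning = counters.get("mem.vmmemctl.average", 0) > 0
    if swapping:
        return "trouble: host is swapping"   # always a sign of memory pressure
    if ballooning:
        return "watch: ballooning active"    # may be normal reclamation
    return "ok"

status = memory_status({"mem.vmmemctl.average": 51200})
```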
Monitoring, reporting and notification on all these metrics can be a challenge. The good news for Kaseya customers is that you can implement the monitoring described in this article using the new network monitor module in VSA 7.0, available now.
- VI API reference
- CIM reference
- CIM monitor – used to monitor hardware and SAN
- Creating a local read-only user for VMware ESXi CIM monitoring
- Datastore monitoring
- VMware performance monitor
Author: Robert Walker
In his earlier blog post, Chad Gniffke outlined some of the key technologies underpinning the new Kaseya Remote Control solution in VSA release 7.0. This includes using modern video codecs to efficiently transmit screen data.
Going beyond these items, the engineering team at Kaseya has looked at every aspect of the remote desktop management workflow to shave precious seconds off the time required to establish a session.
In this post, we review three changes that have a big impact on both the time to connect and the experience once connected.
Keeping it Lean
When it comes to performance, what’s sometimes more important than what you do is what you don’t do. In the new Kaseya Remote Control, we have applied this principle in several areas.
When first connecting to a new agent, downloading remote desktop management binaries to the agent will represent a substantial portion of the connect time. With the new Kaseya Remote Control in VSA 7.0, this delay has been completely eliminated: Everything needed for Remote Control is now included with the agent install itself, and is always available for immediate use.
Likewise, the time to schedule and run an agent procedure against an agent has traditionally accounted for a large portion of the time to connect. Rather than attempt to optimize this, the new Remote Control doesn’t run an agent procedure at all. Instead, it maintains a persistent connection to the VSA server over a dedicated network connection that’s always on, and always available to start Remote Control immediately.
Making it Parallel
Establishing a Remote Control session involves a surprising number of individual steps. In broad strokes, we need to:
- Launch the viewer application.
- Establish a connection from the viewer to the VSA server.
- Perform encryption handshakes to ensure each connection is secure.
- Send Remote Control session details to the agent.
- Wait for the user to accept the Remote Control session (if required by policy).
- Establish relayed connectivity.
- Collect network candidates for P2P connectivity.
- Transmit P2P connection candidates over the network.
- Perform P2P connectivity tests.
- Select the best available network connection to start the session on.
But it turns out most of these steps can be performed in parallel – at least to some degree. For example, the information required to start a P2P connection to an agent can be collected while establishing an encrypted connection to the VSA. If user acceptance is required, a complete P2P connection can usually be negotiated long before the user approves the session. This dramatically reduces the overall time required to establish each session.
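To make the win concrete, here is a minimal sketch using Python’s asyncio of how overlapping independent steps shortens the critical path. The step names mirror the list above, but the durations, function names, and the exact dependency between candidate collection and connectivity tests are illustrative assumptions, not Kaseya’s actual (native) implementation.

```python
# Sketch: sequential vs. overlapped session establishment. Durations are
# invented; only candidate collection must precede the connectivity tests.
import asyncio
import time

async def step(name: str, seconds: float) -> str:
    await asyncio.sleep(seconds)   # stand-in for real network work
    return name

async def run_p2p() -> str:
    # Candidate collection must finish before the connectivity tests run.
    await step("collect P2P candidates", 0.05)
    return await step("P2P connectivity tests", 0.05)

async def connect_sequential() -> float:
    start = time.monotonic()
    await step("encryption handshake", 0.05)
    await step("relayed connectivity", 0.05)
    await run_p2p()
    return time.monotonic() - start

async def connect_parallel() -> float:
    start = time.monotonic()
    # The handshake, relay setup, and P2P negotiation are independent,
    # so they can run concurrently.
    await asyncio.gather(
        step("encryption handshake", 0.05),
        step("relayed connectivity", 0.05),
        run_p2p(),
    )
    return time.monotonic() - start

seq = asyncio.run(connect_sequential())   # four steps back to back
par = asyncio.run(connect_parallel())     # bounded by the longest chain
```

With four 50 ms steps, the sequential path takes about 200 ms while the overlapped path is bounded by the longest dependent chain, roughly 100 ms; the same reasoning is why parallelizing these steps dramatically reduces connect time.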
Utilizing the Hardware
Once connected to a remote agent, Kaseya Remote Control will start streaming full screen video data over the network connection, and drawing it to the viewer’s screen. The video codec under the hood ensures that a minimal amount of data is sent over the network, especially if nothing much is changing on screen. But on the viewer side, we still need to render the entire remote image to screen, at up to 20 frames per second. This can result in increased CPU load and battery drain on the viewer machine.
To reduce the impact on the viewer side, the new Kaseya Remote Control in VSA 7.0 now uses graphics hardware to scale and render the remote video content to screen. Modern graphics cards can perform these operations very efficiently, resulting in a reduced drain on system resources. This will be especially obvious when maintaining long-running connections to multiple remote agents.
These items represent a handful of the many changes going into our new Kaseya Remote Control to speed up connect times, and improve the experience once connected.
To find out more, stop by the Product Lab at Kaseya Connect in Las Vegas next week! And watch this space for a future post about the brand new P2P connection establishment technology that forms the backbone of our next generation Kaseya Remote Control.
The ability to manage remote machines efficiently is a primary driver in selecting an IT management solution. For the last few years, we have relied heavily on browser-side plugins to deliver real-time remote control and behind-the-scenes remote desktop management tools via our Live Connect platform. Browser-side plugins have long provided an efficient way to deliver feature-rich web applications that work across all major browser platforms. That efficiency is now in jeopardy due to changing attitudes toward enabling plugins.
Browser manufacturers are releasing updated versions of their platforms on a more rapid cadence (Chrome and Firefox release new versions every 6 weeks). Over the last year, these updates have introduced new restrictions on plugins that have resulted in unplanned code changes and prolonged loss of functionality in some circumstances.
Installing and uninstalling plugins reliably has also grown more complex. This alone has made supporting plugin-based applications more difficult for software companies and even more frustrating for our users. In the case of Remote Control here at Kaseya, this is a reliability issue we can no longer compromise on.
Browsers are becoming more like lightweight operating systems, and their manufacturers are beginning to move in different directions. For example, Firefox is starting to move away from plugins altogether. Chrome is developing a proprietary plugin API that locks all plugin-based apps within the browser sandbox, primarily a security move by Google. Internet Explorer and Safari have not yet made a move on their plugin support, but we anticipate that this will happen soon.
So, the reports of the death of browser plugins may be an exaggeration, but not by very much. It doesn’t matter which browser you prefer: by the end of this year there are going to be very few opportunities for consistent plugin-based applications such as remote control.
So, if we can’t leverage browser plugins to deliver a great remote control experience, where do we go next? The answer, oddly enough, is to return to an installed native application. An installed application gives us the ability to deliver a high-performance solution that can capture and operate any aspect of a graphical operating system remotely.
You may be thinking that an installed app can solve these problems, but that it brings back a host of headaches around distribution, installation, and communications, the very issues that drove us to browser plugins in the first place. In addition, we want to continue to deliver an integrated management solution that includes remote control and the full breadth of other management capabilities for flexibility and efficiency.
Happily, Kaseya is in a position to help you succeed with remote desktop management in the post-plugin world. We are delivering a browser-free, highly performant installed application that connects in under six seconds to machines anywhere in the world. We can even do this over high-latency, low-bandwidth connections. We simplify the client-side deployment challenge by efficiently deploying the installed application with our agent, so all the software you need to manage the endpoint, including remote control, is already there. Finally, we leverage a URI handler to seamlessly invoke the remote control capabilities from the comfort of the same browser your technical team uses to manage the rest of your environment.
Seem hard to believe? The coming out party for these capabilities will be at Kaseya Connect on April 14th. Stay tuned for more information, or better yet, register today for Kaseya Connect and come see it for yourself!