Why is it called “Multi-Factor” Authentication (MFA)?

Why is multi-factor authentication (MFA) “multi-factor” anyway? A simple enough question, right? Well, it’s not as simple as it sounds.

Depending on where you look, you can see references to two-factor authentication, three-factor authentication, strong authentication, and advanced authentication. Based on the names, it sounds like these are all just subcategories of multi-factor authentication. Unfortunately, that’s only half true, and that’s also where this question gets complicated.

Which types of authentication are always examples of multi-factor authentication?

Two-factor and three-factor authentication are always examples of multi-factor authentication. Multi-factor authentication, by definition, is authentication using at least two of the three possible authentication factors. So yes, two-factor and three-factor authentication are both examples of multi-factor authentication.

What about “strong” and “advanced” authentication?

This is where it gets tricky. Both strong and advanced authentication can be considered multi-factor authentication; however, it depends on how the authentication is implemented. To understand what I mean, we first need to define what multi-factor authentication is.

What is multi-factor authentication?

The term “authentication” refers to the ability to verify the identity of a person attempting to access a system (presumably someone who is authorized to access that system). The term “factor,” then, refers to the different types of tests someone must successfully complete to identify themselves. For IT security, these factors generally fall into three broad categories:

  • Knowledge: Something you know.

    This is the factor upon which password-only systems rely. To pass a knowledge-factor test, you must prove that you know a secret combination, like a password, PIN, or pattern.

  • Possession: Something you have.

    To authenticate using this factor, you must prove you possess something that only you should have, like a key, or an ID card.

  • Inherence: Something you are.

    Inherence means something that is inherently yours. That usually means a unique physical or behavioral characteristic, tested through some sort of biometric system.

Multi-factor authentication requires that a system use at least two of these authentication factors to authenticate users. That’s why it’s “multi-factor” authentication.
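
To make that concrete, here is a minimal sketch of a two-factor login check combining a knowledge factor (a password) with a possession factor (a TOTP code from an authenticator device). It assumes the third-party pyotp library and is purely illustrative, not any particular product’s implementation:

    import hashlib, hmac

    import pyotp  # third-party TOTP library, assumed installed

    def verify_password(password, salt, stored_hash):
        # Knowledge factor: something you know.
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, stored_hash)

    def verify_login(password, totp_code, salt, stored_hash, totp_secret):
        # Possession factor: a code from the device holding totp_secret.
        # Both factors must pass for the login to succeed.
        return (verify_password(password, salt, stored_hash)
                and pyotp.TOTP(totp_secret).verify(totp_code))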

Wait… so what was that about “strong” and “advanced” authentication?

Well, multi-factor authentication requires that at least two factors be used. Both advanced and strong authentication can use two or three tests; however, neither requires that those tests come from different factor categories. Strong authentication could be achieved with a password and a security question, while advanced authentication could be established with a password and a challenge question; in both cases, every test falls under the knowledge factor. This means that, while all multi-factor authentication solutions count as strong or advanced authentication, not all strong and advanced authentication solutions count as multi-factor authentication.
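
The distinction is easy to express in code. Here is an illustrative sketch (the credential names are my own labels, not a standard taxonomy) that counts distinct factor categories; note how a password plus a security question never qualifies:

    FACTOR_OF = {
        "password": "knowledge", "pin": "knowledge", "security_question": "knowledge",
        "smart_card": "possession", "otp_token": "possession",
        "fingerprint": "inherence", "face_scan": "inherence",
    }

    def is_multi_factor(credentials):
        # Multi-factor means tests from at least two *distinct* categories.
        return len({FACTOR_OF[c] for c in credentials}) >= 2

    assert not is_multi_factor(["password", "security_question"])  # strong, but single-factor
    assert is_multi_factor(["password", "otp_token"])              # true MFA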

Why do businesses need multi-factor authentication?

Many groups feel that single-factor authentication is adequate for their needs, but let’s consider something first. You have a bank account, and tied to that bank account you likely have both a debit and a credit card. To access your money, you already use multi-factor authentication: a debit or credit card (possession) and a PIN or password (knowledge). Now consider how much a breach could cost your business. Does your business’ network deserve the same level of protection as your personal bank account, if not more?

Yes, yes it does.

Many industries already require multi-factor authentication! If you work in law enforcement in the United States, then you’re likely required to be CJIS compliant, and CJIS compliance requires advanced authentication. If you work in retail, you’re likely required to be PCI DSS compliant; again, PCI DSS requires multi-factor authentication. If you work in healthcare, then there’s HIPAA to consider, yet another regulation that calls for multi-factor authentication. What this demonstrates is that, for IT security, MFA is becoming mainstream.

What’s my recommendation for a multi-factor authentication solution?

Well, no solution should be a one-size-fits-all response. You should be able to customize and tailor any potential solution so that vital resources are protected without inconveniencing users who don’t require multi-factor authentication. If you’re interested in a solution designed from the ground up with security and usability in mind, then I’d recommend “AuthAnvil Two Factor Auth”.

AuthAnvil Two Factor Auth is a multi-factor authentication server capable of adding identity assurance protection to the servers and desktops you interact with on a regular basis, with deep integration into many of the tools you use day to day. It also works with pretty much anything that supports RADIUS, so along with your Windows logon it can protect things like your VPNs, firewalls and Unix environments. Conveniently enough, it also integrates smoothly with Kaseya, so you can accomplish even more from that single pane of glass.
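
Because anything RADIUS-speaking can front the authentication, integrating a client is straightforward. As a rough sketch using the third-party pyrad library (the server address, shared secret, and dictionary file are placeholders, and this is generic RADIUS code, not AuthAnvil-specific):

    import pyrad.packet
    from pyrad.client import Client
    from pyrad.dictionary import Dictionary

    # Placeholders: your RADIUS server, shared secret, and attribute dictionary file.
    client = Client(server="10.0.0.10", secret=b"sharedsecret",
                    dict=Dictionary("dictionary"))

    req = client.CreateAuthPacket(code=pyrad.packet.AccessRequest, User_Name="alice")
    req["User-Password"] = req.PwCrypt("hunter2123456")  # e.g. password + OTP, if the server expects that

    reply = client.SendPacket(req)
    print("accepted" if reply.code == pyrad.packet.AccessAccept else "rejected")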

Author: Harrison Depner

Remotely Manage and Control Your Ever-Widening IT Environment

Big Bang Theory
According to the “Big Bang” cosmological theory, and the latest calculations from NASA, the universe is expanding at a rate of 46.2 miles (plus or minus 1.3 miles) per second per megaparsec (a megaparsec is roughly 3.3 million light-years). If those numbers are a little too much to contemplate, rest assured that’s really, really fast. And it’s getting faster all the time.

Does this sound a bit like the world of IT management? Maybe IT environments aren’t expanding at 46.2 miles per second per megaparsec, but with cloud services, mobile devices, and increasingly distributed IT environments, it can feel like it. Things that need to be managed are farther away and in motion, which means that the ability to manage remotely is crucial. IT operations teams must be able to connect to anything, anywhere to perform a full range of management tasks.

The list of things that need to be managed remotely continues to grow. Cloud services, mobile devices, and new “things” (as in the Internet of Things) all need to be managed and controlled. To maximize effectiveness, remote management of this comprehensive set should be done from a single command center. Beyond central command, several core functions are needed to successfully manage remotely:

Discovery: Virtually every management vendor offers discovery, but all discovery is not created equal. The discovery capability must be able to reach every device, no matter where it is located – office, home, on the road, anywhere in the world. It must also be an in-depth discovery, providing the device details needed for proper management.
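
As a toy illustration of the reachability half of discovery (a parallel TCP probe sweep over a subnet; the subnet and port below are placeholders, and real discovery also fingerprints each device for the in-depth details mentioned above):

    import ipaddress, socket
    from concurrent.futures import ThreadPoolExecutor

    def reachable(host, port=445, timeout=0.5):
        # Report hosts that accept a TCP connection on the probe port;
        # timeouts and refusals are treated as unreachable.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host
        except OSError:
            return None

    def discover(cidr="192.168.1.0/24", port=445):
        hosts = [str(h) for h in ipaddress.ip_network(cidr).hosts()]
        with ThreadPoolExecutor(max_workers=64) as pool:
            return [h for h in pool.map(lambda h: reachable(h, port), hosts) if h]

    print(discover())   # e.g. ['192.168.1.1', '192.168.1.20']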

Audit and Inventory: It is important to know all the applications that are running on servers, clients, and mobile devices. Are they at the right version level? And for corporately controlled devices (that is, not BYOD devices), are the applications allowed at all? Enforcing standards helps reduce trouble tickets generated when problems are caused by untested/unauthorized applications. A strong auditing and inventory capability informs the operations team so the correct information can be shared and the right actions taken.

Deploy, Patch and Manage Applications: Software deployment, patch, and application management for all clients is key to ensuring users have access to the applications they need, with a consistent and positive experience. With the significant growth in mobility, the capability to remotely manage mobile devices to ensure secure access to chosen business applications, whether on company-owned or BYOD devices, is also important. Arguably, mobile devices are more promiscuous in their use of unsecured networks in coffee shops, airports, and the like, so it’s even more important to keep up with mobile device patch management to ensure security fixes are put in place as soon as possible.

Security: Protecting the network’s outer layer, the endpoints, is an important component of a complete security solution. Endpoint protection starts with a strong endpoint security and malware detection and prevention engine. Combine security with patch management to automatically keep servers, workstations and remote computers up to date with the latest, important security patches and updates.

Monitor: Remote monitoring of servers, workstations, remote computers, Windows Event Logs, and applications is critical to security, network performance and the overall operations of the organization. Proactive, user-defined monitoring with instant notification of problems or changes — when critical servers go down, when users alter their configuration or when a possible security threat occurs — is key to keeping systems working well and the organization running efficiently.
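
A minimal sketch of that monitor-and-notify loop, assuming a plain TCP liveness check and a pluggable notifier (the target hosts are placeholders):

    import socket, time

    def is_up(host, port, timeout=3.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def monitor(targets, interval=30, notify=print):
        state = {}
        while True:
            for host, port in targets:
                up = is_up(host, port)
                previous = state.get((host, port))
                if previous is not None and previous != up:
                    # Instant notification on change: down *or* recovered.
                    notify(f"{host}:{port} is {'UP again' if up else 'DOWN'}")
                state[(host, port)] = up
            time.sleep(interval)

    # monitor([("intranet.example.com", 443), ("db01.example.com", 5432)])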

Remote Control: IT professionals frequently need direct and rapid access to servers, workstations and mobile devices, securely and without impacting user productivity, in order to quickly remediate issues. Remote control capability must deliver a complete, fast and secure remote access and control solution, even behind firewalls or from machines at home. Because seconds matter, remote control should provide near-instantaneous connect times with excellent reliability, even over high-latency networks.

Automation: As the world of IT becomes more and more complex, it is impossible to achieve required service levels and efficiency when repetitive, manual, error-prone tasks are still part of the process. IT managers need to automate everything that can be automated! From software deployment and patch management to client provisioning and clean-up, automation must be an integral part of a remote management solution.

Choosing management tools with strong remote management capabilities is important to achieving customer satisfaction goals and to doing more with the existing IT operations staff. Learn more about how Kaseya technology can help you remotely manage your increasingly complex IT services. Read our whitepaper, Managing the Complexity of Today’s Hybrid IT Environments.

Author: Tom Hayes

Using IT Management Tools to Deliver Advanced Managed Services

The managed service marketplace continues to change rapidly, particularly as more and more businesses are adopting cloud-based services of one type or another. The cloud trend is impacting managed service providers in numerous ways:

  • Cloud migration increases the potential for consulting projects to manage the transition but reduces the number of in-house servers/devices that need to be monitored/managed, impacting retainer-based revenues.
  • Service decisions are more heavily influenced by line-of-business concerns as customers shift spending from capital to operating expenses.
  • Service providers need to be able to monitor and manage application-related service level agreements, not just infrastructure components.
  • Managing cloud-based applications is less about internal cloud resources and their configuration (which are typically controlled by the cloud provider) and more about access, resource utilization and application performance.

To address these challenges and be able to compete in a marketplace where traditional device-based monitoring services are becoming commoditized, MSPs must create and sell more advanced services to meet the changing needs of their customer base. The benefits of delivering higher value services include greater marketplace differentiation, bigger and more profitable deals, and greater customer satisfaction.

Advanced services require the ability to deploy a proactive service level monitoring system that can monitor all aspects of a customer’s hybrid cloud environment, including the core internal network infrastructure, virtualized servers, storage, and both internal and cloud-based applications and resources. Predictive monitoring services help ensure uptime and increased service levels via proactive and reliable notification of potential issues. Advanced remedial actions should include not only support for rapid response when the circumstances warrant, but also regular customer reviews to discuss longer-term configuration changes, additional resource requirements (e.g. storage), policy changes and so on, based on predictive SLA compliance trend reports.
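
To make “predictive SLA compliance” concrete, here is a toy sketch: it computes availability from up/down polling samples and flags when the recent trend would miss the target, even while the month-to-date figure still passes (the sample data is invented):

    def availability(samples):
        # samples: one boolean per polling interval (True = service was up).
        return 100.0 * sum(samples) / len(samples)

    def sla_at_risk(samples, target=99.9, window=1000):
        # Naive "predictive" check: flag when the recent window already trends
        # below the target, even if the month-to-date figure still passes.
        return availability(samples[-window:]) < target

    month = [True] * 99_940 + [False] * 60   # invented polling history
    print(f"month-to-date: {availability(month):.2f}%  at risk: {sla_at_risk(month)}")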

Beyond advanced monitoring, professional skill sets are also very important, particularly when it comes to managing new cloud-based application services. Scaling to support larger customers requires tools that can help simplify the complexity of potential issues and speed mean time to resolution. If every complex issue requires the skills of several experts – server, database, network, application, etc. – it will be hard to scale your organization as your business grows. Having tools that can quickly identify root causes across network and application layers is vital.

Cloud customers are increasingly relying on their managed service providers to “fix” problems with their cloud services, whether they be performance issues, printing issues, integration issues or anything else. Getting a rapid response from a cloud service provider who has “thousands of other customers who are working just fine” is hard to do, especially as the provider knows problems are as likely caused by customer infrastructure issues as by its own. Learning tools help MSPs capture and share experiences and turn them into repeatable processes as they address each new customer issue.

Automation is another must-have for advanced service providers. Again, scalability depends on being able to do as much as possible with as few resources as possible. For infrastructure management, automation can help with service and application monitoring as well as network configuration management. Monitoring with application-aware solutions is an attractive proposition for specific applications. For the rest, it helps to be able to establish performance analytics against key performance indicators and diagnostic monitoring of end-user transaction experiences. For network management, being able to quickly compare network configurations and revert to earlier working versions is one of the fastest ways to improve mean time to repair. Automating patch management and policy management for desktops and servers also results in significant manpower savings.
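
The configuration-comparison piece can be as simple as a unified diff between the last known-good backup and the running config. A generic sketch (the file names are placeholders, and this is not any vendor’s implementation):

    import difflib, pathlib, shutil

    def config_diff(backup="router1.known-good.cfg", running="router1.running.cfg"):
        # Show what changed since the last known-good configuration.
        old = pathlib.Path(backup).read_text().splitlines()
        new = pathlib.Path(running).read_text().splitlines()
        return "\n".join(difflib.unified_diff(old, new, backup, running, lineterm=""))

    def revert(backup="router1.known-good.cfg", running="router1.running.cfg"):
        # "Revert" here just restores the backup copy; a real tool would push
        # the configuration back to the device and verify it took effect.
        shutil.copyfile(backup, running)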

Finally, tools which help manage cloud service usage are also invaluable as customers adopt more and more cloud services. Just as on-premises license management is an issue for larger organizations, so cloud service management, particularly for services such as Office 365, is an issue too. Access to email accounts and collaborative applications such as SharePoint is not only a security issue; it’s also a cost issue and potentially a performance issue.

Developing advanced services requires a combination of the right skills, resources, processes and technology; in effect, a higher level of organizational maturity. Larger customers will tend to have developed greater levels of process and IT maturity themselves, in order to be able to manage their own growing IT environment. When they turn to a managed service provider for help they will be expecting at least an equivalent level of service provider maturity.

Having the right tools doesn’t guarantee success in the competitive MSP marketplace, but it can help MSPs create and differentiate more advanced services, demonstrate the ability to support customer-required SLAs, and scale to support larger and more profitable customers.

Learn more about how Kaseya technology can help you create advanced managed services. Read our whitepaper, Proactive Service Level Monitoring: A Must Have for Advanced MSPs.

What tools are you using to create advanced managed services?

Author: Ray Wright

Deliver Higher Service Quality with an IT Management Command Center

NASA’s Mission Control Center (MCC) manages very sophisticated technology, extreme remote connections, extensive automation, and intricate analytics – in other words, the most complex world imaginable. A staff of flight controllers at the Johnson Space Center manages all aspects of aerospace vehicle flights. Controllers and other support personnel monitor all missions using telemetry, and send commands to the vehicle using ground stations. Managed systems include attitude control and dynamics, power, propulsion, thermal, orbital operations and many other subsystems. They do an amazing job managing the complexity with their command and control center. Of course, we all know how it paid off with the rescue of Apollo 13. Certainly all astronauts appreciate the high service quality MCC provides.

The description of what they do sounds a bit like the world of IT management. While maybe not quite operating at the extreme levels of NASA’s command center, today’s IT managers are working to deliver ever higher service quality amid an increasingly complex world. Managing cloud, mobility, and big data combined with the already complex world of virtualization, shared infrastructure and existing applications, makes meeting service level expectations challenging to say the least. And adding people is usually not an option; these challenges need to be overcome with existing IT staff.

As is true at NASA, a centralized IT command center, properly equipped, can be a game changer. Central command doesn’t mean that every piece of equipment and every IT person must be in one location, but it does mean that every IT manager must have the tools, visibility and information they need to maximize service quality and achieve the highest level of efficiency. To achieve this, here are a few key concepts that should be part of everyone’s central command approach:

Complete coverage: The IT command center needs to cover the new areas of cloud (including IaaS, PaaS, SaaS), mobility (including BYOD), and big data, while also managing the legacy infrastructure (compute, network, storage, and clients) and applications. Business services are now being delivered with a combination of these elements, and IT managers must see and manage it all.

True integration: IT managers must be able to elegantly move between the above coverage areas and service life-cycle functions, including discovery, provisioning, operations, security, automation, and optimization. A command center dashboard with easy access, combined with true SSO (Single Sign-On) and data level integration, allows IT managers to quickly resolve issues and make informed decisions.

Correlation and root cause: The ability to make sense of a large volume of alerts and management information is mandatory. IT managers need to know about any service degradation, together with its root cause, in real time, before a service outage occurs. Mission-critical business services are most often based on IT services, so service uptime needs to meet the needs of the business.

Automation: As the world of IT becomes more and more complex, it is impossible to achieve required service levels and efficiency when repetitive, manual, error-prone tasks are still part of the process. IT managers need to automate everything that can be automated! From software deployment and patch management to client provisioning and clean-up, automation is the key to higher service quality and substantial improvement in IT efficiency.

In addition to these requirements for central command, cloud-based IT management is now a very real option. With the growing complexity of cloud, mobility, and big data, along with the ever increasing pace of change, building one’s own command center is becoming more challenging. IT management in the cloud may be the right choice, especially for MSPs and mid-sized enterprises that do not have the resources to continually maintain and update their own IT management environment. Letting the IT management cloud provider keep up with the complexity and changes, while the IT operations team focuses on delivering high service quality, can drive down TCO, improve efficiency, and increase service levels.

Author: Tom Hayes

Building the World’s Fastest Remote Desktop Management – Part 3

In previous installments of this series, we went over some key technologies used for the new Kaseya Remote Control solution, and some of the features that make it so fast.

But possibly the most important part of getting a fast and reliable Remote Control session is the network connectivity used under the hood. In this post, we cover the types of connectivity used for Kaseya Remote Control, the advantages of each, and how we combine them for additional benefit.

P2P Connectivity

Peer-to-peer connectivity is the preferred method of networking between the viewer application and the agent. It generally offers high throughput and low latency – and because the viewer and agent are connected directly, it places no additional load on the VSA server.

Kaseya Remote Control uses an industry standard protocol called ICE (Interactive Connectivity Establishment) to establish P2P connectivity. ICE is designed to test a wide variety of connection options to find the best one available for the current network environment. This includes TCP and UDP, IPv4 and IPv6, and special network interfaces such as VPN and Teredo.

In addition, ICE takes care of firewall traversal and NAT hole punching. To achieve this, it makes use of the fact that most firewalls and NATs allow reverse connectivity on ports that have been used for outbound connections. This ensures no additional firewall configuration is required to support the new Remote Control solution.

ICE will select the best available P2P connection based on the type of connectivity, how long each connection takes to establish, and a variety of other factors. In practice, this means you will usually get TCP connectivity on local networks, UDP connectivity when crossing network boundaries, and VPN connectivity when no other options are available.
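
To illustrate the selection idea, here is a greatly simplified sketch: TCP-only candidates raced in parallel, with the first success winning. (Real ICE also tests UDP and relayed candidates and weighs candidate priorities, not just speed; the candidate list is a placeholder.)

    import asyncio

    async def try_candidate(host, port):
        # One candidate = one (address, port) pair.
        reader, writer = await asyncio.open_connection(host, port)
        return (host, port), reader, writer

    async def best_connection(candidates, timeout=5.0):
        tasks = [asyncio.create_task(try_candidate(h, p)) for h, p in candidates]
        try:
            for fut in asyncio.as_completed(tasks, timeout=timeout):
                try:
                    return await fut          # first candidate to succeed wins
                except OSError:
                    continue                  # that path failed; keep waiting
            raise ConnectionError("no candidate succeeded")
        except asyncio.TimeoutError:
            raise ConnectionError("candidate tests timed out")
        finally:
            for t in tasks:
                t.cancel()                    # stop the losing attempts

    # asyncio.run(best_connection([("10.0.0.5", 4000), ("192.0.2.7", 4000)]))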

However, testing a wide variety of connectivity options can take several seconds – and in some network environments, it may not be possible to get a P2P connection on any network interface. This brings us to…

Relayed Connectivity

As an alternative to P2P connectivity, Kaseya Remote Control also uses connections relayed through the VSA. Relayed connections are quick to establish and unlikely to be affected by firewalls or NAT devices. They also tend to be more stable over long periods of time, especially relative to P2P connections over UDP.

In practical terms, a relayed connection is made up of outbound TCP connections from the viewer and agent to the VSA, where they are linked up for bidirectional traffic forwarding.

To minimize the network impact, relayed connections from the agent use the same port on the VSA as the agent does for checkins. This means that anytime an agent can check in, it will also be able to establish a relay connection. Conversely, on the viewer side, relayed connections will use the same port on the VSA as the browser uses to view the VSA website: If one works, so will the other.
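
In sketch form, the forwarding core of such a relay is just two byte pumps glued together. (How the VSA matches a viewer connection to the right agent connection, e.g. by session ID, is omitted; this is a generic illustration, not Kaseya’s code.)

    import asyncio

    async def pipe(reader, writer):
        # Copy bytes one way until EOF, then close the write side.
        try:
            while data := await reader.read(65536):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def relay(viewer, agent):
        # viewer/agent are (StreamReader, StreamWriter) pairs from two inbound
        # TCP connections; link them for bidirectional forwarding.
        (v_r, v_w), (a_r, a_w) = viewer, agent
        await asyncio.gather(pipe(v_r, a_w), pipe(a_r, v_w))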

Combining P2P & Relayed Connectivity

It’s clear that P2P and relayed connectivity both have their distinct advantages, so it would be a shame to settle for just one or the other. To get the best of both worlds, the new Kaseya Remote Control uses both types of connectivity in parallel. In particular:

  • When a new Remote Control session starts, we immediately attempt to establish both types of connectivity.
  • As soon as we get a connection of any type, the session starts. Typically, relayed connectivity will be established first, so we’ll start with that. This results in very quick connection times.
  • With the session now underway, we continue to look for better connectivity options. In most cases, a P2P connection will become available within a few seconds.
  • When a P2P connection is established, the session will immediately switch over from relayed to P2P connectivity. This is totally seamless to the user, and causes no interruption to video streaming or mouse and keyboard events.
  • Even after a P2P connection is established, the relayed connection is maintained for the duration of the session. So if P2P connectivity drops off for any reason, Kaseya Remote Control will seamlessly switch back to the relayed connection while attempting to establish a new P2P connection in the background.

The upshot of all this is that you can have fast connection times, high throughput, low latency, and a robust Remote Control connection, all at the same time: No compromises required!
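
A sketch of that fallback logic (assuming asyncio-style stream writers and a framing layer that tolerates the switchover, as the seamlessness described above implies; this is illustrative, not the shipped implementation):

    class SessionTransport:
        """Prefer P2P when available; otherwise fall back to the relay."""

        def __init__(self, relay_writer):
            self.relay = relay_writer   # established first, kept all session
            self.p2p = None             # set once P2P negotiation completes

        def on_p2p_established(self, writer):
            self.p2p = writer           # the next frame flows over P2P

        def on_p2p_lost(self):
            self.p2p = None             # frames fall back to the relay

        def send(self, frame):
            writer = self.p2p if self.p2p is not None else self.relay
            writer.write(frame)         # sender never blocks on the upgrade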

Let Us Know What You Think

The new Desktop Remote Control will be available with VSA 7.0 on May 31. We’re looking forward to getting it into customers’ hands and receiving feedback. To learn more about Kaseya and our plans, please take a look at our roadmap to see what we have in store for future releases of our products.

Users Are Going Mobile And So Should Your IT Management

Circa 2003: I needed to print sensitive corporate data for a client meeting the next morning. Those files were stored on a remote corporate server. I logged on to a company desktop, connected to the corporate LAN and printed the files. The next morning, I realized I had the wrong versions. I hurried back to the office, logged on to the desktop and printed the correct files. I barely made it in time for the client meeting, having left my glasses in the cab as I reviewed the content on the way. Around this time, BlackBerry launched their first smartphones capable of email and web surfing. By the end of 2003, mobile Internet users were virtually nonexistent.

Circa June 2007: Enter Steve Jobs with the iPhone, which completely redefined smartphones as pocket-sized mini-computers. By the end of 2007, there were 400 million mobile internet users globally.

Today (2014): Got a smartphone…check. Got a tablet…check. Got iOS and Android devices…check and check. Set up office applications on them…done. Need to look up corporate files? Wait a minute…and done! Today, there are more than 1.5 billion mobile internet users in the world, and very soon they will surpass desktop internet users.

Since 2007, the adoption of internet-capable smartphones has been stupendous. Almost every corporate employee today owns a smartphone for personal and/or office use. Mobile access to corporate information boosts business productivity (except when you are busy checking Facebook). This in turn helps increase employee job satisfaction and keeps the company agile and responsive to business needs on the go. This is the essence of workforce mobility. But, in order to be future-proof, let’s not misunderstand mobility as the mere use of mobile phones to access data. The definition of “mobile” in this context should entail any computing device, capable of wireless data communication, that moves around with you (i.e. smartphone, tablet, smart watch, or Google Glass — if that ever takes off). And who knows, we may have “smart paper” coming soon.

This proliferation of mobile devices, in volume and in platform diversity, increases the challenges for IT management. The more endpoints that access enterprise data, the greater the exposure to security risk.

The rapid adoption of mobile devices drives two important trends for the IT management staff, namely:

Continue Reading…

Building The World’s Fastest Remote Desktop Management – Part 2

In his earlier blog post, Chad Gniffke outlined some of the key technologies underpinning the new Kaseya Remote Control solution in VSA release 7.0. This includes using modern video codecs to efficiently transmit screen data.

Going beyond these items, the engineering team at Kaseya has looked at every aspect of the remote desktop management workflow to shave precious seconds off the time required to establish a session.

In this post, we review three changes that have a big impact on both the time to connect and the experience once connected.

Keeping it Lean

When it comes to performance, what you don’t do is sometimes more important than what you do. In the new Kaseya Remote Control, we have applied this principle in several areas.

When first connecting to a new agent, downloading remote desktop management binaries to the agent will represent a substantial portion of the connect time. With the new Kaseya Remote Control in VSA 7.0, this delay has been completely eliminated: Everything needed for Remote Control is now included with the agent install itself, and is always available for immediate use.

Likewise, the time to schedule and run an agent procedure against an agent has traditionally accounted for a large portion of the time to connect. Rather than attempt to optimize this, the new Remote Control doesn’t run an agent procedure at all. Instead, it maintains a persistent connection to the VSA server over a dedicated network connection that’s always on, and always available to start Remote Control immediately.

Making it Parallel

Establishing a Remote Control session involves a surprising number of individual steps. In broad strokes, we need to:

  • Launch the viewer application.
  • Establish a connection from the viewer to the VSA server.
  • Perform encryption handshakes to ensure each connection is secure.
  • Send Remote Control session details to the agent.
  • Wait for the user to accept the Remote Control session (if required by policy).
  • Establish relayed connectivity.
  • Collect network candidates for P2P connectivity.
  • Transmit P2P connection candidates over the network.
  • Perform P2P connectivity tests.
  • Select the best available network connection to start the session on.

But it turns out most of these steps can be performed in parallel – at least to some degree. For example, the information required to start a P2P connection to an agent can be collected while establishing an encrypted connection to the VSA. If user acceptance is required, a complete P2P connection can usually be negotiated long before the user approves the session. This dramatically reduces the overall time required to establish each session.
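
Here is a schematic of that overlap using asyncio. The step functions are hypothetical stand-ins whose sleeps simulate latency; the real steps and timings are Kaseya-internal:

    import asyncio

    async def connect_to_vsa():
        await asyncio.sleep(0.3)   # TCP connect + encryption handshake
        return "vsa-connection"

    async def gather_candidates():
        await asyncio.sleep(0.2)   # enumerate local/NAT-mapped endpoints
        return ["10.0.0.5:4000"]

    async def user_accepts():
        await asyncio.sleep(2.0)   # human-speed step
        return True

    async def negotiate_p2p(candidates):
        await asyncio.sleep(0.5)   # exchange candidates, run connectivity tests
        return "p2p-connection"

    async def start_session():
        # Independent steps overlap instead of running back to back.
        vsa, cands = await asyncio.gather(connect_to_vsa(), gather_candidates())
        # P2P negotiation proceeds *while* we wait on the user, so the slow
        # human step no longer adds to the total connect time.
        accepted, p2p = await asyncio.gather(user_accepts(), negotiate_p2p(cands))
        return (p2p or vsa) if accepted else None

    print(asyncio.run(start_session()))   # ~2.3s total vs ~3.0s sequential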

Utilizing the Hardware

Once connected to a remote agent, Kaseya Remote Control will start streaming full screen video data over the network connection, and drawing it to the viewer’s screen. The video codec under the hood ensures that a minimal amount of data is sent over the network, especially if nothing much is changing on screen. But on the viewer side, we still need to render the entire remote image to screen, at up to 20 frames per second. This can result in increased CPU load and battery drain on the viewer machine.

To reduce the impact on the viewer side, the new Kaseya Remote Control in VSA 7.0 now uses graphics hardware to scale and render the remote video content to screen. Modern graphics cards can perform these operations very efficiently, resulting in a reduced drain on system resources. This will be especially obvious when maintaining long-running connections to multiple remote agents.

Diving Deeper

These items represent a handful of the many changes going into our new Kaseya Remote Control to speed up connect times, and improve the experience once connected.

To find out more, stop by the Product Lab at Kaseya Connect in Las Vegas next week! And watch this space for a future post about the brand new P2P connection establishment technology that forms the backbone of our next generation Kaseya Remote Control.

Building The World’s Fastest Remote Desktop Management: Part 1

In 2004 I installed my first Kaseya Agent and launched my first Kaseya remote desktop management session. Never before had I been able to remotely access a NAT’d device without a mapped IP.  It connected in 60-180 seconds, most of the time, and I was awestruck!

Fast forward 10 years and the game has changed, and so must we. On May 30th, 2014, we will release our new Kaseya Remote Control solution as part of VSA 7.0.

The focus for this project has been on 3 key objectives:

  • Connect in 6 seconds
  • Connect 99% of the time
  • Perform well over latent connections

To meet these objectives, fundamental changes were required.   VSA 7.0 brings new code, new methods, and new technology to the product.

Agent to Server Persistence

The VSA Server and Agent have a small yet powerful addition in 7.0: a lightweight, always-on connection between server and agent. Because of this persistent connection, commands can be sent, received and responded to in milliseconds. In the case of Remote Control, the admin can initiate a remote access request and receive a response from the agent in near real-time. This addition substantially reduces the time to connect!

Non-Sequential Communication Channels

Current Kaseya technology communicates in a very sequential manner, meaning there is a “wait in line” restriction that may slow down on-demand requests. With the new communication layer, remote control requests no longer get queued behind other activities on the agent. In addition, we now perform all activities required to set up a remote control session in parallel. For example, if end user permission is required to start a session, connectivity will be established behind the scenes in the meantime, so it’s ready to go when the user is.

Video Codec and Hardware Acceleration

Unlike many competitors’ solutions, Kaseya Remote Control is built using a video codec. Large video streaming firms like Netflix and Hulu use similar codecs to send hi-def movies to your computer. With the advent of these services, innovation in video codec technology continues to grow. Most innovation has focused on increasing video quality (i.e. hi-def) while reducing the bandwidth required. As the demand for 4K video increases, this technology will only become more efficient. Leveraging these efficiencies will only increase our performance.

Leveraging hardware acceleration is another area of opportunity. By using well-known video codec technology, it is possible to leverage the GPU for hardware acceleration on both the admin and endpoint machines. This drives increased performance and responsiveness. Additionally, with more people watching video on their mobile devices, it is possible to leverage mobile GPUs, ensuring the user experience is never compromised, no matter the device.

And We Are Just Getting Started

The best thing about this remote desktop management project is that we are just getting started. We believe it will be faster and more reliable, and will perform well under any circumstances. If you want a sneak peek at our new Kaseya Remote Control and you are an existing Kaseya customer, come join us in Las Vegas for Kaseya Connect, our annual user conference. We hope to see you there.

Stay tuned for Part 2 as we take a deeper look at the technology behind Kaseya Remote Control.
