
Day in the Life of a System Administrator – What Would You Do with More Free Time?

Do you pass the shower test in the morning? No, this isn’t referring to how clean you are – it refers to your attitude about your job.

Are you an IT manager or systems administrator who wakes up in the morning eager to start the day? Do you think about how good the day is going to be while taking your morning shower? Are you excited to get to work? If not, there’s a way to recover the magic and pass the morning shower test. Let’s look at a typical day-in-the-life in IT operations.

Why You Might Not Pass the Test

As an IT manager, you have a tremendous amount of responsibility in planning and directing the IT systems activities in your company. Installation, maintenance, and oversight for every piece of hardware and software fall into your lap. On good days, it’s a challenge. On typical days, it can be a migraine in the making.


Product Design in the IT Management Cloud Era


I think many of us are more aware of the impact of product design than ever before. You may recall that the rounded edges of the iPhone 6 were recently considered newsworthy, even in mainstream television media! Apple has a long history of setting styles for product design and striking a balance between style and usability.

The advent of smart phones and tablets has resulted in millions of user-friendly apps being made available to the consumer market. As a result, there’s a lot of interest in work applications that are just as easy to use. Software for the IT management market is an area where applying modern product design principles can yield significant productivity and value for the companies using these products.

So what are the design principles that you should watch for as the next generation of IT management tools arrive? From a functional perspective, products need to help you centrally command your infrastructure, manage remote and widely distributed environments with ease, and automate everything. To deliver against these key functions, IT management products need to evolve based on the following four design principles:

Mobile First.

All aspects of the product should be designed so they can be used from a tablet or mobile phone – even if they will often be used in a browser. Meeting this goal makes it easy to deliver them within a web UI on a laptop or desktop as well. This is often described as Responsive Design: what is available in the UI, and how you interact with it, adapts to the form factor of the device you are using. On a laptop or tablet, the UI can expose more features. On a small device such as a mobile phone, navigation and other information is available, but not in your way. Another important aspect of a mobile-first approach is making sure the apps have a native feel – the iOS, Android and Windows apps should look and behave as if they are native to the device.
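To make this concrete, here is a toy sketch (the feature names and breakpoints are invented for illustration, not any actual product’s UI model) of how a responsive UI can derive what it exposes from the device’s form factor:

    # Toy sketch of form-factor-driven UI adaptation. Feature names and
    # breakpoints are illustrative assumptions, not a real product's model.
    FEATURES = {
        "phone":   ["alerts", "search", "quick_actions"],
        "tablet":  ["alerts", "search", "quick_actions", "dashboards"],
        "desktop": ["alerts", "search", "quick_actions", "dashboards", "bulk_edit"],
    }

    def form_factor(viewport_width_px):
        """Classify the device by viewport width, as responsive breakpoints do."""
        if viewport_width_px < 600:
            return "phone"
        if viewport_width_px < 1024:
            return "tablet"
        return "desktop"

    print(FEATURES[form_factor(480)])   # phone: core features, out of your way
    print(FEATURES[form_factor(1440)])  # desktop: the full feature set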

Simplify everything.

You need to leverage powerful, policy-driven automation, and be able to implement it simply. You don’t have time to train your technical staff on highly complex products. Well-designed apps perform highly complex actions without exposing that complexity to users, so users can stay highly productive. For example, you should be able to quickly create a policy and apply it reliably, at scale, with just a few clicks. One great way to simplify things is to be consistent in the features provided. For example, always include a Search-driven approach to finding things and taking actions, and have it work the same way in every context.

Use pre-defined content.

Apps should deliver out-of-the-box building blocks that make simplification real. Part of the evolution toward a simpler, easier IT management solution is using content to deliver value quickly. Delivering configuration in the form of pre-packaged settings is an excellent example: apps can include policy and profile definitions so that you don’t have to construct them before you can start using them. The same applies to other app content – prepackaged dashboard templates, agent procedures and automation scripts, profiles, and reports all deliver high productivity. Intelligent default values are probably the simplest form of content; apps can make implementation much simpler by providing recommended choices by default.
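As a hypothetical illustration (the field names and values below are invented, not an actual Kaseya content format), a pre-packaged policy might ship as a declarative definition with intelligent defaults that an administrator applies as-is or overrides selectively:

    # Hypothetical pre-packaged policy; fields and values are invented
    # for illustration, not an actual Kaseya content format.
    RECOMMENDED_PATCH_POLICY = {
        "name": "Workstation Patching (Recommended)",
        "scan_schedule": "daily",          # intelligent default: scan every day
        "install_window": "02:00-04:00",   # patch during off-hours
        "reboot": "prompt_user",           # no surprise reboots
        "approve_severity": "important",   # auto-approve important and critical
    }

    def with_overrides(base, overrides=None):
        """Start from recommended defaults; override only what you care about."""
        return {**base, **(overrides or {})}

    # A server policy reuses the defaults and changes just two fields.
    server_policy = with_overrides(RECOMMENDED_PATCH_POLICY,
                                   {"name": "Server Patching", "reboot": "suppress"})
    print(server_policy["install_window"])  # inherited default: 02:00-04:00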

Provide measurable impact.

Apps should be designed to capture and present metrics that demonstrate their positive impact on your business. The whole reason for getting an IT management tool in the first place is to enable your business. It only makes sense that the app should provide the data to demonstrate that value, too.

By applying these principles, Kaseya is now building a new generation of IT management cloud apps that are genuinely easy to use and that maximize productivity, efficiency and quality for you. Our new Enterprise Mobile Management (EMM) app will reflect these principles in its beta release at the end of October. Kaseya customers can sign up to participate in the beta here. This will be followed by reimagined apps for software delivery, patching, antivirus and antimalware. So stay tuned – we’ll provide more specifics on these solutions in future blogs.

Author: Don LeClair

Building the World’s Fastest Remote Desktop Management – Part 4


Building the world’s fastest remote desktop management solution is a bit like building a high performance car. The first things to worry about are how fast it goes from zero to 60 and how well it performs on the road. Once these are ensured, designers can then add the bells and whistles which make the high end experience complete.

In the first three installments in this series (Part 1, Part 2 and Part 3), we talked about the remote management technology used to deliver speed and performance; now we are ready to talk about the bells and whistles that deliver the high-end experience IT administrators need. Kaseya Remote Control R8, which became available on September 30, adds six new enhancements to ensure greater security and compliance and to help IT administrators resolve issues more quickly on both servers and workstations:

  1. Private Remote Control sessions:

    In many industries, such as healthcare, finance, retail, and education, security during a remote control session is crucial. Administrators cannot risk having the person next to the server or workstation view sensitive information on the remote screen. Kaseya Remote Control R8 allows IT administrators to establish private Remote Control sessions for Windows, so they can work on servers or workstations securely and discreetly.

  2. Track and report on Remote Control sessions:

    These same industries have strict compliance requirements. Remote Control R8 allows IT organizations to track and report on Remote Control sessions by admin, by machine, per month, week, day, etc., with a history of access to meet compliance requirements.

  3. Shadow end user terminal server sessions:

    Many users run terminal server sessions for which they may need assistance. Remote Control R8 lets IT administrators shadow end user terminal server sessions to more easily identify and resolve user issues.

  4. See session latency stats:

    Poor performance is often hard to diagnose. Remote Control R8 shows session latency stats during the remote control session, so administrators are aware of the connection strength and can determine its relevance to an end user’s issues.

  5. Support for Windows Display Scaling:

    HiDPI displays are quickly becoming the norm for new devices. Remote Control R8 includes support for these display types (such as Apple’s Retina displays) so IT administrators can remotely view the latest high-definition displays.

  6. Hardware acceleration:

    Remote management becomes much easier if one can clearly see the remote machine’s screen. Remote Control R8 enables hardware acceleration, leveraging the video card for image processing, for a sharper remote window picture while reducing CPU overhead by 25%-50%, depending on the admin’s computer hardware.

Just like your favorite high-performance car, Kaseya Remote Control R8 delivers the speed, performance and features IT administrators need for a high-end management experience.

Let Us Know What You Think

The new Desktop Remote Control became available with VSA R8 on September 30.

We’re looking forward to receiving feedback on the new capabilities. To learn more about Kaseya and our plans please take a look at our roadmap to see what we have in store for future releases of our products.

Author: Tom Hayes

Why is it called “Multi-Factor” Authentication? (MFA)


Why is multi-factor authentication (MFA) “multi-factor” anyways? A simple enough question, right? Well, it’s not as simple as it sounds.

Depending on where you look, you can see references to two-factor authentication, three-factor authentication, strong authentication, and advanced authentication. Based on the names, it sounds like these are all just subcategories of multi-factor authentication. Unfortunately, that’s only half true, and that’s also where this question gets complicated.

Which types of authentication are always examples of multi-factor authentication?

Two-factor and three-factor authentication are always examples of multi-factor authentication. Multi-factor authentication, by definition, is authentication using at least two of the three possible authentication factors. So yes, two-factor and three-factor authentication are both examples of multi-factor authentication.

What about “strong” and “advanced” authentication?

This is where it gets tricky. Both strong and advanced authentication can be considered multi-factor authentication; however, it depends on how the authentication is implemented. To understand what I mean, we first need to define what multi-factor authentication is.

What is multi-factor authentication?

The term “authentication” refers to the ability to verify the identity of a person attempting to access a system (presumably someone who is authorized to access that system). The term “factor,” then, refers to the different types of tests someone must successfully complete to prove their identity. For IT security, these factors generally fall into three broad categories:

  • Knowledge: Something you know.

    This is the factor upon which password-only systems rely. To pass a knowledge factor based test, you must prove that you know a secret combination, like a password, PIN, or pattern.

  • Possession: Something you have.

    To authenticate using this factor, you must prove you possess something that only you should have, like a key, or an ID card.

  • Inherence: Something you are.

    Inherence means something that is inherently yours. That usually means a unique physical or behavioral characteristic, tested through some sort of biometric system.

Multi-factor authentication requires that a system use at least two of these authentication factors to authenticate users. That’s why it’s “multi-factor” authentication.

Wait… so what was that about “strong” and “advanced” authentication?

Well, multi-factor authentication requires that at least two factors be used. Both advanced and strong authentication can use two or three tests; however, their requirements do not demand that the tests come from different categories. Strong authentication could be achieved with a password and a security question, and advanced authentication could be established with a password and a challenge question; in both cases, the two tests come from the same knowledge category. This means that, while all multi-factor authentication solutions count as strong or advanced authentication, not all strong and advanced authentication solutions count as multi-factor authentication.
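To make the distinction concrete, here is a minimal sketch (the test names are illustrative examples, not an exhaustive taxonomy) applying the rule: tests must span at least two factor categories to count as multi-factor.

    # The test names below are illustrative examples, not a complete taxonomy.
    FACTOR_CATEGORIES = {
        "password": "knowledge",
        "pin": "knowledge",
        "security_question": "knowledge",
        "smart_card": "possession",
        "otp_token": "possession",
        "fingerprint": "inherence",
    }

    def is_multi_factor(tests):
        """True only if the tests span at least two distinct factor categories."""
        return len({FACTOR_CATEGORIES[t] for t in tests}) >= 2

    # Two knowledge tests: strong/advanced multi-step, but NOT multi-factor.
    print(is_multi_factor(["password", "security_question"]))  # False
    # Knowledge plus possession: genuine two-factor authentication.
    print(is_multi_factor(["password", "otp_token"]))          # True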

Why do businesses need multi-factor authentication?

Many groups feel that single-factor authentication is adequate for their needs, but let’s consider something first. You have a bank account, and tied to that bank account you likely have both a debit and a credit card. To access your money, you already use multi-factor authentication: you have a debit or credit card (possession) and a PIN or password (knowledge). Now, consider how much damage a breach could cost your business. Does your business’ network deserve the same level of protection as your personal bank account, if not more?

Yes, yes it does.

Many industries already require multi-factor authentication! If you work in law enforcement in the United States, then you’re likely required to be CJIS compliant, and CJIS compliance requires advanced authentication. If you work in retail, you’re likely required to be PCI compliant; PCI compliance requires multi-factor authentication. If you work in healthcare, then there’s HIPAA to consider, yet another regulation that requires multi-factor authentication. What this demonstrates is that, for IT security, MFA is becoming mainstream.

What’s my recommendation for a multi-factor authentication solution?

Well, no solution should be a one-size-fits-all response. You should be able to customize and tailor any potential solution so that vital resources are protected, without inconveniencing users who don’t require multi-factor authentication. If you’re interested in a solution designed from the ground up with security and usability in mind, then I’d recommend AuthAnvil Two Factor Auth.

AuthAnvil Two Factor Auth is a multi-factor authentication server that adds identity assurance protection to the servers and desktops you interact with on a regular basis, and offers deep integration with many of the tools you use day to day. It also works with pretty much anything that supports RADIUS, so along with your Windows logon it can protect things like your VPNs, firewalls and Unix environments. Conveniently enough, it also integrates smoothly with Kaseya, so you can accomplish even more from that single pane of glass.
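To illustrate the RADIUS point, here is a rough sketch using the third-party pyrad library (the server name, shared secret, and credentials are placeholders, and AuthAnvil’s own integration steps may differ):

    # Rough sketch with the third-party pyrad library; server, secret, and
    # credentials are placeholders. The "dictionary" file ships with pyrad.
    from pyrad.client import Client
    from pyrad.dictionary import Dictionary
    from pyrad import packet

    client = Client(server="radius.example.com", secret=b"shared-secret",
                    dict=Dictionary("dictionary"))

    req = client.CreateAuthPacket(code=packet.AccessRequest, User_Name="jdoe")
    # With an MFA-enabled RADIUS server, the password field typically carries
    # the PIN/password plus the one-time passcode.
    req["User-Password"] = req.PwCrypt("hunter2123456")

    reply = client.SendPacket(req)
    print("accepted" if reply.code == packet.AccessAccept else "rejected")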

For more information on multi-factor authentication: Click Here

For a look at how AuthAnvil’s Kaseya integration can be used: Click Here

Author: Harrison Depner

Remotely Manage and Control Your Ever-Widening IT Environment

According to the “Big Bang” cosmological theory, and the latest calculations from NASA, the universe is expanding at a rate of 46.2 miles (plus or minus 1.3 miles) per second per megaparsec (a megaparsec is roughly 3 million light-years). If those numbers are a little too much to contemplate, rest assured that’s really, really fast. And it’s getting faster all the time.

Does this sound a bit like the world of IT management? So maybe IT environments aren’t expanding at 46.2 miles per second per megaparsec, but with cloud services, mobile devices, and increasingly distributed IT environments, it feels like it. Things that need to be managed are further away and in motion, which means that the ability to manage remotely is crucial. IT operations teams must be able to connect to anything, anywhere to perform a full range of management tasks.

The list of things that need to be managed remotely continues to grow. Cloud services, mobile devices, and new “things” (as in the Internet of Things) all need to be managed and controlled. To maximize effectiveness, remote management of this comprehensive set should be done from a single command center. Beyond central command, several core functions are needed to successfully manage remotely:

Discovery: Virtually every management vendor offers discovery, but not all discovery is created equal. The discovery capability must be able to reach every device, no matter where it is located – office, home, on the road, anywhere in the world. It must also be an in-depth discovery, providing the device details needed for proper management.

Audit and Inventory: It is important to know all the applications that are running on servers, clients, and mobile devices. Are they at the right version level? And for corporately controlled devices (that is, not BYOD devices), are the applications allowed at all? Enforcing standards helps reduce trouble tickets generated when problems are caused by untested/unauthorized applications. A strong auditing and inventory capability informs the operations team so the correct information can be shared and the right actions taken.

Deploy, Patch and Manage Applications: Software deployment, patch, and application management for all clients is key to ensuring users have access to the applications they need, with a consistent and positive experience. With the significant growth in mobility, the capability to remotely manage mobile devices, whether company-owned or BYOD, to ensure secure access to chosen business applications is also important. Arguably, mobile devices are more promiscuous in their use of insecure networks in coffee shops, airports and the like, so it’s even more important to keep up with mobile device patch management and ensure security fixes are put in place as soon as possible.

Security: Protecting the outer layer of the network, the endpoints, is an important component of a complete security solution. Endpoint protection starts with a strong endpoint security and malware detection and prevention engine. Combine security with patch management to automatically keep servers, workstations and remote computers up to date with the latest important security patches and updates.

Monitor: Remote monitoring of servers, workstations, remote computers, Windows Event Logs, and applications is critical to security, network performance and the overall operations of the organization. Proactive, user-defined monitoring with instant notification of problems or changes — when critical servers go down, when users alter their configuration or when a possible security threat occurs — is key to keeping systems working well and the organization running efficiently.

Remote Control: IT professionals frequently need direct and rapid access to servers, workstations and mobile devices, securely and without impacting the productivity of users, in order to quickly remediate issues. Remote control capability must deliver a complete, fast and secure remote access and control solution, even behind firewalls or on machines at home. Because seconds matter, remote control should provide near-instantaneous connect times with excellent reliability, even over high-latency networks.

Automation: As the world of IT becomes more and more complex, it is impossible to achieve required service levels and efficiency when repetitive, manual, error-prone tasks are still part of the process. IT managers need to automate everything that can be automated! From software deployment and patch management to client provisioning and clean-up, automation must be an integral part of a remote management solution.

Choosing management tools with strong remote management capabilities is important to achieving customer satisfaction goals, and doing more with the existing IT operations staff. Learn more about how Kaseya technology can help you remotely manage your increasingly complex IT services. Read our whitepaper, Managing the Complexity of Today’s Hybrid IT Environments.

Author: Tom Hayes

Using IT Management Tools to Deliver Advanced Managed Services


The managed service marketplace continues to change rapidly, particularly as more and more businesses are adopting cloud-based services of one type or another. The cloud trend is impacting managed service providers in numerous ways:

  • Cloud migration increases the potential for consulting projects to manage the transition but reduces the number of in-house servers/devices that need to be monitored/managed, impacting retainer-based revenues.
  • Service decisions are more heavily influenced by line-of-business concerns as customers shift spending from capital to operating expenses.
  • Service providers need to be able to monitor and manage application-related service level agreements, not just infrastructure components.
  • Managing cloud-based applications is less about internal cloud resources and their configuration (which are typically controlled by the cloud provider) and more about access, resource utilization and application performance.

To address these challenges and be able to compete in a marketplace where traditional device-based monitoring services are becoming commoditized, MSPs must create and sell more advanced services to meet the changing needs of their customer base. The benefits of delivering higher value services include greater marketplace differentiation, bigger and more profitable deals, and greater customer satisfaction.

Advanced services require the ability to deploy a proactive service level monitoring system that can monitor all aspects of a customer’s hybrid cloud environment, including the core internal network infrastructure, virtualized servers, storage, and both internal and cloud-based applications and resources. Predictive monitoring services help ensure uptime and increased service levels via proactive and reliable notification of potential issues. Advanced remedial actions should include not only support for rapid response when circumstances warrant, but also regular customer reviews to discuss longer-term configuration changes, additional resource requirements (e.g. storage), policy changes and so on, based on predictive SLA compliance trend reports.

Beyond advanced monitoring, professional skill sets are also very important, particularly when it comes to managing new cloud-based application services. Scaling to support larger customers requires tools that can help simplify the complexity of potential issues and speed mean time to resolution. If every complex issue requires the skills of several experts (server, database, network, application, etc.), it will be hard to scale your organization as your business grows. Having tools that can quickly identify root causes across network and application layers is vital.

Cloud customers are increasingly relying on their managed service providers to “fix” problems with their cloud services, whether they be performance issues, printing issues, integration issues or anything else. Getting a rapid response from a cloud service provider who has “thousands of other customers who are working just fine” is hard, especially as they know problems are as likely caused by customer infrastructure issues as by their own. Learning tools help MSPs capture and share experiences and turn them into repeatable processes as they address each new customer issue.

Automation is another must-have for advanced service providers. Again, scalability depends on being able to do as much as possible with as few resources as possible. For infrastructure management, automation can help with service and application monitoring as well as network configuration management. Monitoring with application-aware solutions is an attractive proposition for specific applications. For the rest, it helps to be able to establish performance analytics against key performance indicators and diagnostic monitoring of end-user transaction experiences. For network management, being able to quickly compare network configurations and revert to earlier working versions is one of the fastest ways to improve mean time to repair, as the sketch below illustrates. Automating patch management and policy management for desktops and servers also results in significant manpower savings.
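For instance (a simple sketch of the idea, not any particular vendor’s feature), comparing a device’s running configuration against its last known-good version takes only a few lines with Python’s standard difflib, making “what changed?” the first question answered during an outage:

    import difflib

    def config_diff(known_good, running, device):
        """Unified diff between the last known-good config and the running config."""
        return "".join(difflib.unified_diff(
            known_good.splitlines(keepends=True),
            running.splitlines(keepends=True),
            fromfile=f"{device} (known good)",
            tofile=f"{device} (running)",
        ))

    # A non-empty diff tells the operator exactly what to review or revert.
    print(config_diff("hostname edge-1\nsnmp on\n",
                      "hostname edge-1\nsnmp off\n", "edge-1"))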

Finally, tools that help manage cloud service usage are also invaluable as customers adopt more and more cloud services. Just as on-premise license management is an issue for larger organizations, cloud service management, particularly for services such as Office 365, is an issue as well. Access to email accounts and collaborative applications such as SharePoint is not only a security issue; it’s also a cost issue and potentially a performance issue.

Developing advanced services requires a combination of the right skills, resources, processes and technology; in effect, a higher level of organizational maturity. Larger customers will tend to have developed greater levels of process and IT maturity themselves, in order to be able to manage their own growing IT environment. When they turn to a managed service provider for help they will be expecting at least an equivalent level of service provider maturity.

Having the right tools doesn’t guarantee success in the competitive MSP marketplace but can help MSPs create and differentiate more advanced services, demonstrate the ability to support customer-required SLAs, and scale to support larger and more profitable customers.

Learn more about how Kaseya technology can help you create advanced managed services. Read our whitepaper, Proactive Service Level Monitoring: A Must Have for Advanced MSPs.

What tools are you using to create advanced managed services?

Author: Ray Wright

Deliver Higher Service Quality with an IT Management Command Center


NASA’s Mission Control Center (MCC) manages very sophisticated technology, extremely remote connections, extensive automation, and intricate analytics – in other words, the most complex world imaginable. A staff of flight controllers at the Johnson Space Center manages all aspects of aerospace vehicle flights. Controllers and other support personnel monitor all missions using telemetry and send commands to the vehicle using ground stations. Managed systems include attitude control and dynamics, power, propulsion, thermal, orbital operations and many other subsystems. They do an amazing job managing this complexity from their command and control center. Of course, we all know how it paid off with the rescue of Apollo 13. Astronauts certainly appreciate the high service quality MCC provides.

The description of what they do sounds a bit like the world of IT management. While maybe not quite operating at the extreme levels of NASA’s command center, today’s IT managers are working to deliver ever higher service quality amid an increasingly complex world. Managing cloud, mobility, and big data combined with the already complex world of virtualization, shared infrastructure and existing applications, makes meeting service level expectations challenging to say the least. And adding people is usually not an option; these challenges need to be overcome with existing IT staff.

As is true at NASA, a centralized IT command center, properly equipped, can be a game changer. Central command doesn’t mean that every piece of equipment and every IT person must be in one location, but it does mean that every IT manager has the tools, visibility and information they need to maximize service quality and achieve the highest level of efficiency. To achieve this, here are a few key concepts that should be part of everyone’s central command approach:

Complete coverage: The IT command center needs to cover the new areas of cloud (including IaaS, PaaS, SaaS), mobility (including BYOD), and big data, while also managing the legacy infrastructure (compute, network, storage, and clients) and applications. Business services are now being delivered with a combination of these elements, and IT managers must see and manage it all.

True integration: IT managers must be able to elegantly move between the above coverage areas and service life-cycle functions, including discovery, provisioning, operations, security, automation, and optimization. A command center dashboard with easy access, combined with true SSO (Single Sign-On) and data level integration, allows IT managers to quickly resolve issues and make informed decisions.

Correlation and root cause: The ability to make sense of a large volume of alerts and management information is mandatory. IT managers need to know about any service degradation, together with its root cause, in real time, before a service outage occurs. Mission-critical business services are most often based on IT services, so service uptime needs to meet the needs of the business.

Automation: As the world of IT becomes more and more complex, it is impossible to achieve required service levels and efficiency when repetitive, manual, error-prone tasks are still part of the process. IT managers need to automate everything that can be automated! From software deployment and patch management to client provisioning and clean-up, automation is the key to higher service quality and substantial improvement in IT efficiency.

In addition to these requirements for central command, cloud-based IT management is now a very real option. With the growing complexity of cloud, mobility, and big data, along with the ever increasing pace of change, building one’s own command center is becoming more challenging. IT management in the cloud may be the right choice, especially for MSPs and mid-sized enterprises which do not have the resources to continually maintain and update their own IT management environment. Letting the IT management cloud provider keep up with the complexity and changes, while the IT operations team focuses on delivering high service quality, can drive down TCO, improve efficiency, and increase service levels.

Author: Tom Hayes

Building the World’s Fastest Remote Desktop Management – Part 3


In previous installments of this series, we went over some key technologies used for the new Kaseya Remote Control solution, and some of the features that make it so fast.

But possibly the most important part of getting a fast and reliable Remote Control session is the network connectivity used under the hood. In this post, we cover the types of connectivity used for Kaseya Remote Control, the advantages of each, and how we combine them for additional benefit.

P2P Connectivity

Peer-to-peer connectivity is the preferred method of networking between the viewer application and the agent. It generally offers high throughput and low latency – and because the viewer and agent are connected directly, it places no additional load on the VSA server.

Kaseya Remote Control uses an industry standard protocol called ICE (Interactive Connectivity Establishment) to establish P2P connectivity. ICE is designed to test a wide variety of connection options to find the best one available for the current network environment. This includes TCP and UDP, IPv4 and IPv6, and special network interfaces such as VPN and Teredo.

In addition, ICE takes care of firewall traversal and NAT hole punching. To achieve this, it makes use of the fact that most firewalls and NATs allow reverse connectivity on ports that have been used for outbound connections. This ensures no additional firewall configuration is required to support the new Remote Control solution.

ICE will select the best available P2P connection based on the type of connectivity, how long each connection takes to establish, and a variety of other factors. In practice, this means you will usually get TCP connectivity on local networks, UDP connectivity when crossing network boundaries, and VPN connectivity when no other options are available.
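In spirit, the selection step looks something like the sketch below (greatly simplified: real ICE, per RFC 5245, computes candidate priorities from a formula and runs its connectivity checks in parallel):

    # Greatly simplified ICE-style selection. The candidate types and
    # priorities are illustrative; real ICE uses a priority formula.
    CANDIDATE_PRIORITY = {"host_tcp": 3, "reflexive_udp": 2, "vpn": 1}

    def select_best(candidates, check):
        """candidates: (type, address) pairs; check: connectivity-test callback."""
        viable = [(CANDIDATE_PRIORITY[ctype], ctype, addr)
                  for ctype, addr in candidates
                  if check(addr)]
        if not viable:
            return None  # no P2P path; fall back to the relayed connection
        _, ctype, addr = max(viable)
        return ctype, addr

    pairs = [("reflexive_udp", "198.51.100.7:50000"), ("host_tcp", "10.0.0.5:5900")]
    print(select_best(pairs, check=lambda addr: True))  # ('host_tcp', '10.0.0.5:5900')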

However, testing a wide variety of connectivity options can take several seconds – and in some network environments, it may not be possible to get a P2P connection on any network interface. This brings us to…

Relayed Connectivity

As an alternative to P2P connectivity, Kaseya Remote Control also uses connections relayed through the VSA. Relayed connections are quick to establish and unlikely to be affected by firewalls or NAT devices. They also tend to be more stable over long periods of time, especially relative to P2P connections over UDP.

In practical terms, a relayed connection is made up of outbound TCP connections from the viewer and agent to the VSA, where they are linked up for bidirectional traffic forwarding.

To minimize the network impact, relayed connections from the agent use the same port on the VSA as the agent does for check-ins. This means that any time an agent can check in, it will also be able to establish a relay connection. Conversely, on the viewer side, relayed connections use the same port on the VSA as the browser uses to view the VSA website: if one works, so will the other.
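Conceptually, the relay itself is simple. Here is a bare-bones asyncio sketch of the idea (an illustration, not the VSA’s actual relay code) that links two outbound TCP connections for bidirectional forwarding:

    import asyncio

    async def pump(reader, writer):
        """Copy bytes one way until the source side closes."""
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
        writer.close()

    async def relay(viewer, agent):
        """viewer and agent are (reader, writer) pairs obtained from the
        outbound TCP connections each side made to the relay server."""
        await asyncio.gather(
            pump(viewer[0], agent[1]),  # viewer -> agent
            pump(agent[0], viewer[1]),  # agent -> viewer
        )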

Combining P2P & Relayed Connectivity

It’s clear that P2P and relayed connectivity both have their distinct advantages, so it would be a shame to settle for just one or the other. To get the best of both worlds, the new Kaseya Remote Control uses both types of connectivity in parallel (a sketch of the pattern follows the list below). In particular:

  • When a new Remote Control session starts, we immediately attempt to establish both types of connectivity.
  • As soon as we get a connection of any type, the session starts. Typically, relayed connectivity will be established first, so we’ll start with that. This results in very quick connection times.
  • With the session now underway, we continue to look for better connectivity options. In most cases, a P2P connection will become available within a few seconds.
  • When a P2P connection is established, the session will immediately switch over from relayed to P2P connectivity. This is totally seamless to the user, and causes no interruption to video streaming or mouse and keyboard events.
  • Even if a P2P connection is established, the relayed connection is maintained for the duration of the session. So if P2P connectivity drops off for any reason, Kaseya Remote Control will seamlessly switch back to the relayed connection, while attempting to establish a new P2P connection in the background.
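A minimal asyncio sketch of this race-and-upgrade pattern (the connection functions and timings below are invented stand-ins; the real implementation is of course far more involved):

    import asyncio

    async def connect(kind, delay):
        """Stand-in for a real connection attempt; returns a transport label."""
        await asyncio.sleep(delay)
        return kind

    async def start_session():
        # Attempt both transports at the same time.
        relay = asyncio.ensure_future(connect("relay", 0.1))  # quick to establish
        p2p = asyncio.ensure_future(connect("P2P", 2.0))      # slower, but better

        # Start the session on whichever connection is ready first --
        # typically the relay.
        done, _ = await asyncio.wait({relay, p2p},
                                     return_when=asyncio.FIRST_COMPLETED)
        print(f"session started on {done.pop().result()}")

        # Keep working toward the better transport, then switch seamlessly;
        # the relay stays alive as a fallback for the life of the session.
        print(f"switched to {await p2p}; relay kept as fallback")

    asyncio.run(start_session())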

The upshot of all this is that you can have fast connection times, high throughput, low latency, and a robust Remote Control connection, all at the same time: No compromises required!

Let Us Know What You Think

The new Desktop Remote Control will be available with VSA 7.0 on May 31. We’re looking forward to getting it into customer hands, and receiving feedback. To learn more about Kaseya and our plans please take a look at our roadmap to see what we have in store for future releases of our products.

Users Are Going Mobile And So Should Your IT Management


Circa 2003: I needed to print sensitive corporate data for a client meeting the next morning. Those files were stored on a remote corporate server. I logged on to a company desktop, connected to the corporate LAN, and printed the files. The next morning, I realized I had the wrong versions. I hurried back to the office, logged on to the desktop, and printed the correct files. I barely made it to the client meeting in time, and I left my glasses in the cab after reviewing the content on the way. Around this time, BlackBerry launched their first smartphones capable of email and web surfing. By the end of 2003, mobile Internet users were virtually nonexistent.

Circa June 2007: Enter Steve Jobs with the iPhone, which completely redefined smartphones as pocket-sized mini-computers. By the end of 2007, there were 400 million mobile internet users globally.

Today (2014): Got a smartphone…check. Got a tablet…check. Got iOS and Android devices…check and check. Set up office applications on them…done. Need to look up corporate files? Wait a minute…and done! Today, there are more than 1.5 billion mobile internet users in the world, and very soon they will surpass desktop internet users.

Since 2007, the adoption of internet-capable smartphones has been stupendous. Almost every corporate employee today owns a smartphone for personal and/or office use. Mobile access to corporate information boosts business productivity (except when you are busy checking Facebook). This, in turn, helps increase employee job satisfaction and keeps the company agile and responsive to business needs on the go. That is the essence of workforce mobility. But in order to be future-proof, let’s not misunderstand mobility as the mere use of mobile phones to access data. “Mobile” in this context should entail any computing device capable of wireless data communication that moves around with you (e.g. a smartphone, tablet, smart watch, or Google Glass – if that ever takes off). And who knows, we may have “smart paper” coming soon.

This proliferation of mobile devices, in volume and in platform diversity, increases the challenges for IT management. The more endpoints that access enterprise data, the greater the exposure to security risks.

The rapid adoption of mobile devices drives two important trends for the IT management staff, namely:


Building The World’s Fastest Remote Desktop Management – Part 2


In his earlier blog post, Chad Gniffke outlined some of the key technologies underpinning the new Kaseya Remote Control solution in VSA release 7.0. This includes using modern video codecs to efficiently transmit screen data.

Going beyond these items, the engineering team at Kaseya has looked at every aspect of the remote desktop management workflow to shave precious seconds off the time required to establish a session.

In this post, we review three changes that have a big impact on both the time to connect and the experience once connected.

Keeping it Lean

When it comes to performance, what’s sometimes more important than what you do is what you don’t do. In the new Kaseya Remote Control, we have applied this principle in several areas.

When first connecting to a new agent, downloading remote desktop management binaries to the agent will represent a substantial portion of the connect time. With the new Kaseya Remote Control in VSA 7.0, this delay has been completely eliminated: Everything needed for Remote Control is now included with the agent install itself, and is always available for immediate use.

Likewise, the time to schedule and run an agent procedure against an agent has traditionally accounted for a large portion of the time to connect. Rather than attempt to optimize this, the new Remote Control doesn’t run an agent procedure at all. Instead, it maintains a persistent connection to the VSA server over a dedicated network connection that’s always on, and always available to start Remote Control immediately.

Making it Parallel

Establishing a Remote Control session involves a surprising number of individual steps. In broad strokes, we need to:

  • Launch the viewer application.
  • Establish a connection from the viewer to the VSA server.
  • Perform encryption handshakes to ensure each connection is secure.
  • Send Remote Control session details to the agent.
  • Wait for the user to accept the Remote Control session (if required by policy).
  • Establish relayed connectivity.
  • Collect network candidates for P2P connectivity.
  • Transmit P2P connection candidates over the network.
  • Perform P2P connectivity tests.
  • Select the best available network connection to start the session on.

But it turns out most of these steps can be performed in parallel – at least to some degree. For example, the information required to start a P2P connection to an agent can be collected while establishing an encrypted connection to the VSA. If user acceptance is required, a complete P2P connection can usually be negotiated long before the user approves the session. This dramatically reduces the overall time required to establish each session.
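As a simplified illustration of the idea (the step names and timings are invented stand-ins, not actual Kaseya internals), awaiting independent steps together lets their latencies overlap instead of adding up:

    import asyncio, time

    async def step(name, seconds):
        await asyncio.sleep(seconds)  # stand-in for real network round trips
        return name

    async def establish_session():
        t0 = time.monotonic()
        # Run independent setup steps concurrently; done sequentially these
        # would cost ~1.0s, overlapped they cost roughly the longest step.
        await asyncio.gather(
            step("encrypted connection to VSA", 0.5),
            step("gather P2P candidates", 0.3),
            step("send session details to agent", 0.2),
        )
        print(f"session ready in {time.monotonic() - t0:.2f}s")  # ~0.50s

    asyncio.run(establish_session())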

Utilizing the Hardware

Once connected to a remote agent, Kaseya Remote Control will start streaming full screen video data over the network connection, and drawing it to the viewer’s screen. The video codec under the hood ensures that a minimal amount of data is sent over the network, especially if nothing much is changing on screen. But on the viewer side, we still need to render the entire remote image to screen, at up to 20 frames per second. This can result in increased CPU load and battery drain on the viewer machine.

To reduce the impact on the viewer side, the new Kaseya Remote Control in VSA 7.0 now uses graphics hardware to scale and render the remote video content to screen. Modern graphics cards can perform these operations very efficiently, resulting in a reduced drain on system resources. This will be especially obvious when maintaining long-running connections to multiple remote agents.

Diving Deeper

These items represent a handful of the many changes going into our new Kaseya Remote Control to speed up connect times, and improve the experience once connected.

To find out more, stop by the Product Lab at Kaseya Connect in Las Vegas next week! And watch this space for a future post about the brand new P2P connection establishment technology that forms the backbone of our next generation Kaseya Remote Control.
