Archive - Enterprise IT RSS Feed

Simplifying IT Compliance for Credit Unions & Community Banks


As a CIO, IT manager, or MSP servicing financial institutions, you know that compliance is a top priority. You need to manage both internal compliance, ensuring employees follow operational policies to mitigate risk, and external compliance, following rules and regulations established by outside entities such as government regulators.

Realizing the need for compliance is just half of the equation. Managing that compliance efficiently and cost-effectively is the other half. Let’s briefly look at some of the compliance issues facing financial institutions, and then at how you can handle that second part without a big-bank budget.

Compliance Worries of Financial Institutions

Compliance, auditing, and security issues keep financial professionals awake at night, and rightly so. Security lapses and failures in compliance can lead to huge fines, ruined reputations, and lost customer confidence. A CDW-sponsored survey in 2015 asked senior executives at banks what concerns they had for their institutions:
Continue Reading…

The Real Reason Your Workforce Is Not As Productive As It Should Be


Chances are, in an average day, you are not accomplishing as many tasks as you would like… and neither are your colleagues or your employees. What makes that statement puzzling is that today’s workforce is putting in more hours and more effort than ever before, even as organizations adopt more IT devices and applications designed to improve user productivity. In fact, this has been a key driver for workforce mobility – providing flexible access to business IT resources (applications, data, email, and other services) from any device, at any location, at any time, in order to improve overall business performance. But even the most accomplished business professionals must admit there are days when little gets done despite herculean efforts.

Continue Reading…

Kaseya Connect: Come for the Conversations!


Every year Kaseya holds its premier user conference, Kaseya Connect. If you are a Kaseya customer, you have probably seen the promotions, and are asking yourself, “Should I attend?” As you might expect, the Connect event features all of the right components:

Continue Reading…

Haste Prevents Waste: Single Sign-On Can Improve Any MSP’s Profit Margin


As people gain access to more online resources, they need to remember an ever-increasing number of usernames and passwords. Unfortunately, more usernames and passwords means more time spent keeping track of them.

If you’re a business owner and you don’t have password management software, then you’re letting your employees manage their passwords on their own. Your users could be setting the stage for every IT security manager’s worst nightmare: an office full of sticky notes with usernames and passwords clearly visible around workstations and cubicles. Without some form of password management solution, your employees are suffering ongoing frustration as they try to manage their passwords while following your IT security requirements.

If your business is already using password management software, then you should have a solution that manages which resources your employees are able to access, and which credentials they should use to do so. Unfortunately, your password system may not be doing everything it can to provide simple and secure access for your employees.

What if there was a way for users to have strong, unique passwords without needing to remember them, while still retaining a high degree of security?

Regardless of how you’re managing your passwords today, you can eliminate password frustration, increase your employees’ efficiency, and improve your IT security by implementing a single sign-on password management solution.

What is Single Sign-On?

Single sign-on (SSO) is a system through which users can access multiple applications, websites, and accounts by logging in to a single web portal just once. After the user has logged into the portal, he or she can access those resources without needing to enter additional usernames or passwords.

Single sign-on is made possible by a password management system that stores each user’s login ID and password for each resource. When a user navigates from a single sign-on portal to a site or application, the password management system typically provides the user’s login credentials behind the scenes. From the users’ perspective, they appear to be logged in automatically.
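To make that behind-the-scenes step concrete, here is a minimal sketch of a portal-style credential store in Python. The class, user, and resource names are purely illustrative (not any vendor’s API), and a real portal would encrypt the vault and hand off an authentication assertion, or post the stored credentials to the resource’s login form, rather than returning them.

```python
# Minimal illustration of an SSO portal's credential vault.
# All names here are hypothetical; this is not a production design.
from dataclasses import dataclass


@dataclass
class Credential:
    username: str
    password: str  # a real vault would encrypt this at rest


class SsoPortal:
    def __init__(self):
        # user -> {resource -> stored credential}
        self._vault = {}

    def enroll(self, user, resource, cred):
        self._vault.setdefault(user, {})[resource] = cred

    def open_resource(self, user, resource):
        # Called only after the user has authenticated to the portal
        # (ideally with MFA). The user never sees or types this password.
        cred = self._vault[user][resource]
        # A real portal would submit these credentials (or an assertion)
        # to the target application behind the scenes.
        return f"logged in to {resource} as {cred.username}"


portal = SsoPortal()
portal.enroll("alice", "crm.example.com", Credential("alice.w", "X9!long-random-secret"))
print(portal.open_resource("alice", "crm.example.com"))
```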

High-quality SSO solutions can provide access to a variety of internal and external resources by using standard protocols such as SAML, WS-Fed, and WS-Trust.

As with any password management application, security is a critical consideration for SSO systems. Single sign-on is often implemented in conjunction with some form of multi-factor authentication (MFA) to ensure that only authorized users are able to log into the SSO web portal.

5 Reasons MSPs Benefit from Single Sign-On

  1. SSO can create exceptionally strong password security. When paired with multi-factor authentication (MFA), single sign-on gives you a password management solution that can be both user friendly and extremely secure.
  2. SSO makes enforcing password policies easier. In addition to allowing strong passwords for critical resources, an SSO system makes it easier to assign and maintain those passwords. In some cases, you can take users out of the password management process entirely: a good SSO system will allow you to assign passwords behind the scenes and change them as needed as your security needs evolve.
  3. Users won’t need or want to save passwords to their unsecure browser. To the average end user, the ability of a web browser like Chrome to remember and submit passwords is a huge bonus; however, while saved passwords offer some of the benefits of single sign-on, web browsers offer none of the security that comes with a true password management solution. When you implement an SSO system, you eliminate the temptation for employees to save their passwords in their browsers, because the SSO portal does that job instead, and often does it better. At that point you could remove that feature from their browsers without the risk of angering your users.
  4. Single sign-on makes your systems easier to secure. Rather than securing dozens or even hundreds of access points to your systems, your security administrators can focus the majority of their efforts on securing just one – the SSO system. If you pair the SSO system with multi-factor authentication, your credentials will be more secure and manageable than a collection of independently secured websites and systems.
  5. Reduced IT help desk calls. Experts estimate that the average employee calls the IT help desk for password assistance about four times per year. At roughly 20 minutes per call, that’s 80 minutes of help desk time, and about 160 minutes of wasted time (IT staff plus end user), per user per year, as the quick calculation below shows. A good SSO solution puts that time and money back on your bottom line and frees your IT professionals to spend their time on more important projects.
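As a quick sanity check on those numbers, here is the same arithmetic as a short script. The hourly rates are illustrative assumptions added for sizing, not figures from the estimate above.

```python
# Back-of-the-envelope cost of password-related help desk calls per user.
# The two hourly rates are assumptions for illustration only.
CALLS_PER_USER_PER_YEAR = 4
MINUTES_PER_CALL = 20          # IT staff time per call
IT_RATE_PER_HOUR = 45.0        # assumed blended help desk cost
USER_RATE_PER_HOUR = 35.0      # assumed loaded end-user cost

it_minutes = CALLS_PER_USER_PER_YEAR * MINUTES_PER_CALL   # 80 minutes of IT time
total_minutes = it_minutes * 2                            # 160 minutes including the end user
cost = (it_minutes / 60) * (IT_RATE_PER_HOUR + USER_RATE_PER_HOUR)

print(f"{total_minutes} wasted minutes, roughly ${cost:.2f} per user per year")
# Multiply by the number of seats you manage to size the savings opportunity.
```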

Now, before you go looking for a single sign-on system for your business, let me throw one more factor into the mix. If you’re reading this at blog.kaseya.com, there’s a good chance you’re a Kaseya customer. If you are, or you’re interested in becoming one, make sure the solution you choose supports a Kaseya integration. Scorpion Software, which Kaseya acquired not long ago, offers a full Kaseya integration of its user authentication and password management suite, including single sign-on, multi-factor authentication, and many other features. So, if you’re looking for a Kaseya-optimized suite, there’s no better place to start; that way you can accomplish even more from a single pane of glass.

If you want more information on what a good single sign-on system should do: Click Here

If you want to know what I would recommend as a single sign-on solution: Click Here

Author: Harrison Depner

Remotely Manage and Control Your Ever-Widening IT Environment

According to the “Big Bang” cosmological theory, and the latest calculations from NASA, the universe is expanding at a rate of 46.2 miles (plus or minus 1.3 miles) per second per megaparsec (a megaparsec is roughly 3 million light-years). If those numbers are a little too much to contemplate, rest assured that’s really, really fast. And it’s getting faster all the time.

Does this sound a bit like the world of IT management? So maybe IT environments aren’t expanding at 46.2 miles per second per megaparsec, but with cloud services, mobile devices, and increasingly distributed IT environments, it feels like it. Things that need to be managed are further away and in motion, which means that the ability to manage remotely is crucial. IT operations teams must be able to connect to anything, anywhere to perform a full range of management tasks.

The list of things that need to be managed remotely continues to grow. Cloud services, mobile devices, new things (as in the “Internet of Things”) all need to be managed and controlled. To maximize effectiveness, remote management of this comprehensive set should be done within a single command center. Beyond central command, several core functions are needed to successfully manage remotely:

Discovery: Virtually every management vendor offers discovery, but not all discovery is created equal. The discovery capability must be able to reach every device, no matter where it is located – office, home, on the road, anywhere in the world. It must also be in-depth, providing the device details needed for proper management.

Audit and Inventory: It is important to know all the applications that are running on servers, clients, and mobile devices. Are they at the right version level? And for corporately controlled devices (that is, not BYOD devices), are the applications allowed at all? Enforcing standards helps reduce trouble tickets generated when problems are caused by untested/unauthorized applications. A strong auditing and inventory capability informs the operations team so the correct information can be shared and the right actions taken.
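As a simple illustration of what an audit check does with inventory data, the sketch below compares a device’s installed applications against an approved baseline. The application list, versions, and the audit() helper are hypothetical; in practice the inventory would come from your management platform’s audit module.

```python
# Compare an inventoried application list against an approved baseline.
# Data is hard-coded for illustration; string version comparison is a
# simplification, so use a proper version parser in real checks.
APPROVED = {
    "Chrome": "121.0",
    "7-Zip": "23.01",
    "Acrobat Reader": "24.001",
}

def audit(device, installed):
    findings = []
    for app, version in installed.items():
        if app not in APPROVED:
            findings.append(f"{device}: '{app}' is not on the approved list")
        elif version < APPROVED[app]:
            findings.append(f"{device}: {app} {version} is below baseline {APPROVED[app]}")
    return findings

for finding in audit("WS-0142", {"Chrome": "118.0", "uTorrent": "3.6"}):
    print(finding)
```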

Deploy, Patch and Manage Applications: Software deployment, patch, and application management for all clients are key to ensuring users have access to the applications they need, with a consistent and positive experience. With the significant growth in mobility, the capability to remotely manage mobile devices – whether company-owned or BYOD – to ensure secure access to approved business applications is also important. Arguably, mobile devices are more promiscuous in their use of unsecured networks in coffee shops, airports, and the like, so it’s even more important to keep up with mobile device patch management and get security fixes in place as soon as possible.

Security: Protecting the network’s outer layer, the endpoints, is an important component of a complete security solution. Endpoint protection starts with a strong endpoint security and malware detection and prevention engine. Combine security with patch management to automatically keep servers, workstations, and remote computers up to date with the latest important security patches and updates.

Monitor: Remote monitoring of servers, workstations, remote computers, Windows Event Logs, and applications is critical to security, network performance, and the overall operations of the organization. Proactive, user-defined monitoring with instant notification of problems or changes – when critical servers go down, when users alter their configuration, or when a possible security threat occurs – is key to keeping systems working well and the organization running efficiently.

Remote Control: IT professionals frequently need direct and rapid access to servers, workstations, and mobile devices – securely, and without impacting user productivity – in order to quickly remediate issues. Remote control capability must deliver a complete, fast, and secure remote access and control solution, even behind firewalls or on machines at home. Because seconds matter, remote control should provide near-instantaneous connect times with excellent reliability, even over high-latency networks.

Automation: As the world of IT becomes more and more complex, it is impossible to achieve required service levels and efficiency when repetitive, manual, error-prone tasks are still part of the process. IT managers need to automate everything that can be automated! From software deployment and patch management to client provisioning and clean-up, automation must be an integral part of a remote management solution.
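A minimal sketch of that idea, assuming a made-up endpoint inventory and a hypothetical queue_patch() helper standing in for your management platform’s agent procedures:

```python
# Illustrative automation loop: queue any missing patches on each endpoint
# and log the action. Endpoint data and queue_patch() are stand-ins for a
# real management platform's API.
import datetime

endpoints = [
    {"name": "SRV-01", "missing_patches": ["KB5034123", "KB5034441"]},
    {"name": "WS-0142", "missing_patches": []},
]

def queue_patch(endpoint, patch):
    # In production this would call an agent procedure or REST endpoint
    # rather than printing.
    print(f"{datetime.datetime.now():%H:%M} queued {patch} on {endpoint}")

for ep in endpoints:
    for patch in ep["missing_patches"]:
        queue_patch(ep["name"], patch)
```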

Choosing management tools with strong remote management capabilities is important to achieving customer satisfaction goals, and doing more with the existing IT operations staff. Learn more about how Kaseya technology can help you remotely manage your increasingly complex IT services. Read our whitepaper, Managing the Complexity of Today’s Hybrid IT Environments.

Author: Tom Hayes

MSP Best Practice: 4 Keys to Automation


The benefits of automation were lauded as far back as 1908 when Henry Ford created the assembly line to manufacture his famous “any color you like as long as it’s black” Model T. Before assembly lines were introduced, cars were built by skilled teams of engineers, carpenters and upholsterers who worked to build vehicles individually. Yes, these vehicles were “hand crafted” but the time needed and the resultant costs were both high. Ford’s assembly line stood this traditional paradigm on its end. Instead of a team of people going to each car, cars now came to a series of specialized workers. Each worker would repeat a set number of tasks over and over again, becoming increasingly proficient, reducing both production time and cost. By implementing and refining the process, Ford was able to reduce the assembly time by over 50% and reduce the price of the Model T from $825 to $575 in just four years.

Fast forward a hundred years (or so) and think about the way your support operation works now. Does your MSP operation function like the teams of pre-assembly-line car manufacturers, or have you implemented automated processes? Some service providers and many in-house IT services groups still function like the early car manufacturers. The remediation process kicks off when an order (trouble ticket) arrives. Depending on the size (severity) of the order, one or more “engineers” are allocated to solving the problem. Simple issues may be dealt with by individual support staff, but more complex issues – typically those relating to poor system performance or security rather than device failures – can require the skills of several people: specialists in VMware, networking, databases, applications, and so on. Unfortunately, unlike the hand-crafted car manufacturers who sold to wealthy customers, MSPs can’t charge more for more complex issues. Typically you receive a fixed monthly fee based on the number of devices or seats you support.

So how can you “bring the car to the worker” rather than vice-versa? Automation for sure, but how does it work? What are the key steps you need to take?

  1. Be proactive – the first and most important step is to be proactive. Like Ford with the Model T, you already know what it takes to keep a customer’s IT infrastructure running; if you didn’t, you wouldn’t be in the MSP business. Use that knowledge to plan out all the proactive actions needed to prevent problems from occurring in the first place. A simple example is patch management. Is it automated? If not, as the population of supported devices grows, each update cycle will take longer and longer to complete. The days immediately after a patch is released are often the most crucial: if the release eliminates a security vulnerability, the patch announcement can alert hackers and spur them to attack systems before the patch gets installed. If that happens, there’s much more to do to eliminate the malware and clean up whatever mess it caused. Automating patch management saves time and gets patches installed as quickly as possible.
  2. Standardize – develop a checklist of technology standards that you can apply to every similar device and throughout each customer’s infrastructure: common anti-virus and backup processes; common lists of recommended standard applications and utilities; recommended amounts of memory; and standard configurations, particularly for network devices. By developing standards you’ll take a lot of guesswork out of troubleshooting; you’ll know if something is incorrectly configured or if a rogue application is running. And by automating the set-up of new users, for example, you can ensure that they at least start out meeting the desired standards. You can even establish an automated process to audit the status of each device and report back when compliance is contravened. The benefit to your customers is fewer problems and faster problem resolution. Don’t set your standards so tightly that you can’t meet customers’ needs, but do set expectations during the sales process so customers know why you have standards and how they help you deliver better services.
  3. Policy management – beyond standards are policies, most likely concerned with the governance of IT usage. Policy covers areas such as access security, password refresh, allowable downloads, application usage, who can action updates, and so on. Ensuring that users comply with the policies required by your customers and implied by your standards is another way to reduce the number of trouble tickets that get generated. Downloading unauthorized applications, or even unvetted updates to authorized applications, can expose systems to “bloatware”. At best this drains system resources and can impact productivity, storage capacity, and performance; at worst, users may be inadvertently downloading malware, with all of its repercussions. Setting up proactive policy management can prevent these unwanted actions from the outset. Use policy management to continuously check compliance.
  4. Continuously review – even when you have completed the prior three steps, there is still more that can be done. Being proactive will have made a significant impact on the number of trouble tickets generated, but they will never reach zero – the IT world is far too complex. By reviewing the tickets you can discover further areas where automation may help: are there particular applications that cause problems, particular configurations, particular user habits? By continuously reviewing and adjusting your standards, policy management, and automation scripts (see the sketch below), you will further decrease the workload on your professional staff and more easily “bring the car (problem)” to the right specialist.
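The review step can start out very simply. The sketch below, using made-up ticket records, tallies closed tickets by category so the recurring ones stand out as the next candidates for automation or policy.

```python
# Tally closed tickets by category to spot automation candidates.
# Ticket records are invented for illustration.
from collections import Counter

tickets = [
    {"id": 101, "category": "password reset"},
    {"id": 102, "category": "disk full"},
    {"id": 103, "category": "password reset"},
    {"id": 104, "category": "printer offline"},
    {"id": 105, "category": "password reset"},
]

by_category = Counter(t["category"] for t in tickets)
for category, count in by_category.most_common(3):
    print(f"{count:3d}  {category}")
# Categories that top this list month after month are the ones worth
# scripting, standardizing, or covering with policy next.
```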

As Henry Ford knew, automation is a powerful tool that will help you reduce the number of trouble tickets generated and, more importantly, the number of staff needed to deal with them. By reducing the volume and narrowing the scope, together with the right management tools, you’ll be able to free up staff time to help drive new business, improve customer satisfaction, and ultimately increase your sales. By 1914 – six years after he started – Henry Ford had an estimated 48% of the US automobile market!

What tools are you using to manage your IT services?

Author: Ray Wright

Three Key Monitoring Capabilities for VMware Virtualized Servers

The percentage of servers that are virtualized continues to grow, but management visibility remains a challenge. In this blog post we look at three key monitoring capabilities – full metal, datastore, and performance – that give you the visibility and control you need to keep your virtualized applications performing well.

Before we start, below is a description of the information models which are important to hypervisor management:

Common Information Model

The Common Information Model (CIM) is an open standard that defines the management and monitoring of devices, and elements of devices, in a datacenter.

VMware Infrastructure API

The VMware Infrastructure (VI) API is VMware’s proprietary API for management and monitoring of components related to the VMware hypervisor.

Full metal monitoring

Fan status

The fan is essential for proper server function. As rack density goes up, server volume shrinks and fans need to work at higher speeds, which means more wear and tear. A broken fan in a server can quickly cause a major heat build-up that affects the server and possibly neighbouring servers. The good news is that it’s relatively easy to monitor the state of the fans. The CIM_Fan class exposes a property called HealthState that reports the health of a fan: OK, degraded, or failed.
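As an illustration, the fan check can be scripted against the host’s CIM service. This is a minimal sketch assuming the pywbem library (pip install pywbem) and an ESXi host with its CIM service enabled; the host name and credentials are placeholders, and the value map follows the standard CIM HealthState definitions.

```python
# Query fan health from a host's CIM service using pywbem.
# Host, credentials, and certificate handling are illustrative only.
import pywbem

HEALTH = {0: "Unknown", 5: "OK", 10: "Degraded", 15: "Minor failure",
          20: "Major failure", 25: "Critical failure", 30: "Non-recoverable error"}

conn = pywbem.WBEMConnection("https://esxi01.example.com",
                             ("root", "password"),
                             default_namespace="root/cimv2",
                             no_verification=True)  # lab use only

for fan in conn.EnumerateInstances("CIM_Fan"):
    state = HEALTH.get(fan["HealthState"], fan["HealthState"])
    print(f"{fan['ElementName']}: {state}")
```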

PSU health

Power supply health is important to monitor. Most enterprise servers can be configured with redundant power supplies; in addition, it’s good to have a spare on hand. OMC_PowerSupply is a class that exposes the HealthState property for each PSU in your server. Just like the fan, each PSU is reported as OK, degraded, or failed.

Power usage

The VI API can be used to measure average power usage, which gives an indication of the server’s utility cost. More power usage means more heat, which means even more utility cost in the form of heat dissipation. The relevant VI API counter is power.power.average.


RAID controller, storage volume and battery backup

Three key storage elements that you should monitor are the RAID controller, the storage volumes, and the battery. The controller and disks seem obvious, but the battery? In many cases a high-performance RAID controller has a battery to back up the onboard memory in case of a power outage. The memory on the controller is most commonly used for write-back caching; when the server loses power, the battery ensures that the cache remains intact until power is restored and its contents can be written to disk.

Datastore monitoring

Utilization, IOPS, and latency are metrics that should be monitored and analyzed together. When you have performance problems in a disk subsystem, an “OK” latency can tell you to go look for problems with IOPS instead; high utilization can explain why you may not be getting the expected IOPS out of the system, and so on.

Utilization

Utilization can be calculated using the capacity and freeSpace properties of the DatastoreSummary object.
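A minimal sketch of that calculation using pyVmomi, VMware’s Python SDK for the VI/vSphere API (pip install pyvmomi); the vCenter host and credentials are placeholders, and certificate verification is disabled purely to keep the example short.

```python
# Report datastore utilization from the DatastoreSummary capacity and
# freeSpace properties. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()   # lab use only
si = SmartConnect(host="vcenter.example.com", user="readonly@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    for dc in content.rootFolder.childEntity:
        for ds in getattr(dc, "datastore", []):
            s = ds.summary
            if s.capacity:
                used_pct = 100.0 * (s.capacity - s.freeSpace) / s.capacity
                print(f"{s.name}: {used_pct:.1f}% used")
finally:
    Disconnect(si)
```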

IOPS

IO operations per second can be monitored using the VI API datastore.datastoreIops.average counter, which provides an average of read and write IO operations.

Latency

Latency can be measured using the datastore.totalWriteLatency.average and datastore.totalReadLatency.average counters. They show average read and write latency for the whole chain, including both kernel and device latency.

Performance monitoring

CPU

Threads scheduled to run on a CPU can be in one of two states: waiting or ready. Both of these states can tell a story about resource shortage. The lesser evil of the two is the wait state, which indicates that the thread is waiting for an IO operation to complete. This can be as simple as waiting for an answer from a resource external to the host, or waiting on disk time. The more serious state is the so-called “ready” state, which indicates that the thread is ready to run but there is no CPU free to serve it.


Memory ballooning and IOPS

Memory ballooning is a process that can happen when a host experiences a low-memory condition and probes the virtual machines for memory to free up. The balloon driver in each VM tries to allocate as much memory as possible within the VM (up to 65% of the available VM memory), and the host frees this memory and adds it to the host memory pool.

The memory ballooning counter, mem.vmmemctl.average, can show when this happens. So how can memory ballooning make a dent in your IO graph, you may ask? After the host reclaims memory from VMs, those VMs may start to use their own virtual memory and page memory blocks to disk, which is why memory ballooning may precede higher-than-normal IO activity.

Memory swapping

Ballooning may happen even if there is no issue; it’s a strategy the host uses to make sure there is free memory for any VM to consume. Host swapping, however, is always a sign of trouble. There are a number of counters that you want to monitor:

mem.swapin.average
mem.swapout.average
mem.swapinRate.average
mem.swapoutRate.average

These counters show, both cumulatively and as a rate, how much memory is swapped in and out. Host memory swapping is double trouble: not only does it indicate that you have a low host memory situation, it is also going to affect IO performance.

Final words

Monitoring, reporting and notification on all these metrics can be a challenge. The good news for Kaseya customers is that you can implement the monitoring described in this article using the new network monitor module in VSA 7.0, available now.


Author: Robert Walker

IT Best Practices: Minimizing Mean Time to Resolution


Driving IT efficiency is a topic that always makes it to the list of top issues for IT organizations and MSPs alike when budgeting or discussing strategy. In a recent post we talked about automation as a way to help reduce the number of trouble tickets and, in turn, to improve the effective use of your most expensive asset – your professional services staff. This post looks at the other side of the trouble ticket coin – how to minimize the mean time to resolution of problems when they do occur and trouble tickets are submitted.

The key is to reduce the variability in time spent resolving issues. Easier said than done? Mean Time to Repair/Resolve (MTTR) can be broken down into four main activities:

  • Awareness: identifying that there is an issue or problem
  • Root-cause: understanding the cause of the problem
  • Remediation: fixing the problem
  • Testing: verifying that the problem has been resolved

Of these four components, awareness, remediation, and testing tend to be the smaller activities and also the less variable ones.

The time taken to become aware of a problem depends primarily on the sophistication of the monitoring system(s). Comprehensive capabilities that monitor all aspects of the IT infrastructure and group infrastructure elements into services tend to be the most productive. Proactive service level monitoring (SLM) enables IT operations to view systems across traditional silos (e.g. network, server, applications) and to analyze the performance trends of the underlying service components. By developing trend analyses in this way, proactive SLM can identify future issues before they occur – for example, when application traffic is expanding and bandwidth is becoming constrained, or when server storage is reaching its limit. When unpredicted problems do occur, being able to quickly identify their severity, eliminate downstream alarms, and determine business impact are also important factors in containing variability and deploying the correct resources for maximum impact.
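As a simple illustration of the storage example, the sketch below fits a straight line to recent usage samples and projects when the volume will fill. The data points are invented; a real SLM tool would use its own monitoring history and more robust models.

```python
# Project when a volume fills, given recent daily usage samples (GB).
# Sample data is invented for illustration.
daily_used_gb = [610, 618, 623, 631, 640, 646, 655]   # last 7 days
capacity_gb = 800

n = len(daily_used_gb)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(daily_used_gb) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_used_gb))
         / sum((x - mean_x) ** 2 for x in xs))          # growth in GB per day

days_left = (capacity_gb - daily_used_gb[-1]) / slope if slope > 0 else float("inf")
print(f"Growing ~{slope:.1f} GB/day; volume full in roughly {days_left:.0f} days")
```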

Identifying the root cause is usually the biggest source of MTTR variability and the one with the highest cost associated with it. Once again the solution lies both with the tools you use and the processes you put in place. Often management tools are selected by each IT function to help with their specific tasks – the network group will have in-depth network monitoring capabilities, the database group database performance tools, and so on. These tools are generally not well integrated and lack visibility at a service level. Correlation using disparate tools is also manpower-intensive, requiring staff from each function to meet and try to avoid the inevitable finger-pointing.

The service level view is important, not only because it provides visibility into business impact, but also because it represents a level of aggregation from which to start the root cause analysis. Many IT organizations start out using free, open source tools but soon realize there is a cost to “free” as their infrastructures grow in size and complexity. Tools that look at individual infrastructure aspects can be integrated but, without underlying logic, they have a hard time correlating events and reliably identifying root cause. Poor diagnostics can be as bad as no diagnostics in more complex environments, and investigating unnecessary downstream alarms to make sure they are not separate issues is a significant waste of resources.

Consider a frequently cited cause of MTTR variability – poor application performance. In this case nothing is specifically “broken”, so it’s hard to diagnose with point tools. A unified dashboard that shows both application process metrics and network or packet-level metrics provides a valuable diagnostic view. As a simple example, a response-time monitor could send an alert that the response time of an application is too high. Application performance monitoring data might indicate that a database is responding slowly to queries because its buffers are starved and the number of transactions is abnormally high. Integrating with network flow (NetFlow) or packet data then allows immediate drill-down to isolate which client IP address is the source of the high number of queries. This level of integration speeds the root cause analysis and removes the finger-pointing so that the optimum remedial action can be quickly identified.
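The drill-down described above might look something like this sketch: aggregate flow records destined for the database server by source IP and surface the top talker. The addresses and flow records are made-up stand-ins for real NetFlow or packet data.

```python
# Find the client generating the query storm by aggregating flow records
# toward the database server. Records are invented for illustration.
from collections import Counter

DB_SERVER = "10.0.5.20"
flows = [
    {"src": "10.0.8.31", "dst": DB_SERVER, "packets": 120},
    {"src": "10.0.8.77", "dst": DB_SERVER, "packets": 45300},
    {"src": "10.0.8.14", "dst": DB_SERVER, "packets": 210},
]

talkers = Counter()
for flow in flows:
    if flow["dst"] == DB_SERVER:
        talkers[flow["src"]] += flow["packets"]

top_ip, packets = talkers.most_common(1)[0]
print(f"Top talker toward {DB_SERVER}: {top_ip} ({packets} packets)")
```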

Once a problem has been identified, the last two pieces of the MTTR equation can be addressed. The times required for remediation and testing tend to be far less variable and can be shortened by defining clear processes and responsibilities. Automation can also play a key role. For example, a great many issues are caused by misconfiguration; rolling back configurations to the last known-good state can be done automatically, quickly eliminating issues even while in-depth root-cause analysis continues. Automation can play a vital role in testing too, by making sure that performance meets requirements and that service levels have returned to normal.

To maximize IT efficiency and effectiveness and help minimize mean time to resolution, IT management systems can no longer work in vertical or horizontal isolation. The interdependence between services, applications, servers, cloud services, and network infrastructure mandates the adoption of comprehensive service-level management capabilities for companies with more complex IT service infrastructures. The amount of data generated by these various components is huge, and the rate of generation is so fast that traditional point tools cannot integrate it or keep up with any kind of real-time correlation.

Learn more about how Kaseya technology can help you manage your increasingly complex IT services. Read our whitepaper, Managing the Complexity of Today’s Hybrid IT Environments.

What tools are you using to manage your IT services?

Author: Ray Wright

Why IT Operations Needs a Comprehensive IT Management Cloud Solution

Performance-related issues are among the hardest IT problems to solve, period. When something is broken, alarm bells sound (metaphorically, in most cases) and alerts are sent to let IT ops know there’s an issue. But when performance slows for end-user applications, there’s often no notification until someone calls to complain. Yet, in reality, losing 10 or 15 seconds every minute for an hour is at least as bad as an hour of downtime in an 8-hour day. At least as bad, and maybe worse – there’s the productivity hit, but there’s also significant frustration. At least when the system is really down, users can turn their attention to other tasks while the fixes are installed.


One reason why this remains an ongoing issue for many IT organizations is that there are few management tools that provide an overall view of the entire IT infrastructure with the ability to correlate between all of its different components. It’s not that there aren’t individual tools to monitor and manage everything in the system; it’s that coordinating results from these different tools is time-consuming and hard to do. There are lots of choices when it comes to server monitoring, desktop monitoring, application monitoring, network monitoring, cloud monitoring, and so on, and there are suites of products that cover many of the bases. The challenge is that in most cases these management components never get fully integrated to the point where the overall solution can find the problem and quickly identify root cause.

If IT were a static science, it’s a good bet that this problem would have been solved a long time ago. But as we know, IT is a hotbed of innovation. New services, capabilities, and approaches are released regularly, and the immense variety of infrastructure components supporting today’s IT environments makes it difficult for monitoring and management vendors to keep up. New management tools appear frequently too, but the effort of addressing existing infrastructures is often cost-prohibitive for start-ups trying to get their new management capabilities to market quickly.

The complexity and pace of change lead some IT organizations to rely on open source technologies and “freeware”, with the benefit that capital and operational expenses are kept to a minimum. Yet the results of using such tools are often less than satisfactory. While users can benefit greatly from the community of developers, it’s often hard to get a comprehensive product without buying a commercially supported version. Another issue for open source IT management solutions is that they’re generally not architected to support a large and increasingly complex IT infrastructure – at least not in a way that makes it possible to quickly perform sophisticated root-cause analyses. The result is that while the tools may be inexpensive, the time and resources needed to use them can be much greater and their impact less than satisfactory.

IT management is its own “big data” problem.

As IT infrastructure continues to become ever more complex, IT management is becoming its own big data problem. Querying an individual device or server to check status and performance may retrieve only a relatively small amount of data, but it is likely a diverse set of information covering numerous parameters and configuration items. Polling mobile devices, desktops, servers, applications, cloud services, hypervisors, routers, switches, firewalls, and more generates a whole lot of data, with each item having its own unique set of parameters and configurations to retrieve. Polling hundreds, thousands, or even tens of thousands of devices every few minutes (so that the management system stays current with device status) can create significant network traffic that must be supported without impacting business applications.

On top of that, the volume of data, the polling frequency, and the resulting velocity of traffic must be accommodated to support storage, trend analysis, and real-time processing. System management information is usually stored for months or even years so that historical trends and analyses can be performed. Most importantly, the management system needs to process information in real time to correlate cause and effect, suppress downstream alert and alarm conditions, and perform predictive analysis so that valid alerts can be proactively sent to IT ops. Now system management architecture becomes important. Add the need for flexibility to accommodate the ever-changing IT landscape, and management system design to support this “big data application” becomes a critical issue.
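To put rough numbers on that, here is a back-of-the-envelope sizing; the device count, polling interval, payload size, and retention period are illustrative assumptions, not measurements.

```python
# Rough sizing of management polling volume under assumed parameters.
devices = 10000
poll_interval_min = 5
payload_kb = 8            # assumed average data returned per poll
retention_days = 365

polls_per_day = devices * (24 * 60 // poll_interval_min)
daily_gb = polls_per_day * payload_kb / 1024 / 1024
yearly_tb = daily_gb * retention_days / 1024

print(f"{polls_per_day:,} polls/day, ~{daily_gb:.1f} GB/day, "
      f"~{yearly_tb:.1f} TB/year retained")
```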

This, in part, is why IT management vendors are migrating their solutions to the cloud. As IT infrastructures continue to expand in terms of size, complexity and diversity the ability to support the volumes of data generated and the performance needed to undertake comprehensive root-cause analysis becomes more and more challenging for purely on-premise solutions. In addition, cloud solutions offer the promise of “best practice” advice that can only be derived from a shared-service model, with the caveat that security and privacy remain paramount.

Of course, cloud solutions, with their pay-as-you-use pricing and installations that require no on-premise infrastructure, are also becoming far more attractive than perpetual licensing arrangements. The bottom line, however, is that cloud-architected solutions are extremely extensible and able to more quickly and easily accommodate new functionality, to the benefit of all users – not least the ability to deploy better diagnostic tools and capabilities to support the needs of today’s diverse IT user communities for high levels of systems availability AND performance.

Author: Ray Wright

Mobile Malware Proliferates – How to Protect Your Information


As the lines between corporate and personal mobile devices continue to blur, with more and more people using a single device for both work and personal life, not only is personal data at risk, but corporate data could be too.

Recent IBM research into the top tactics behind today’s cyber attacks shows that mobile malware is becoming more prevalent, especially on Android devices. As Android continues to gain in popularity – IDC research reveals that Android has nearly 80% of global smartphone market share – cyber criminals are focusing their attention on these popular devices.

The IBM report states that, “As the number of users who own and operate Android devices is rapidly expanding, so too have malware authors increased their effort to take advantage of this larger market. Older mobile devices are even more vulnerable as only six percent of Android devices are running the latest version of the platform which has the security enhancements needed to combat these threats.”

If businesses adopt an MDM-only approach to controlling corporate devices, relying solely on Mobile Device Management, they can’t guarantee the safety of their data. While MDM is important for ensuring compliance and manageability of the device, MDM alone cannot guarantee the safety of data on a device if it’s compromised. Once a device is compromised, an attacker can remove MDM controls, or at the very least circumvent them, so that the intrusion carries on unnoticed.

For businesses pursuing a Bring Your Own Device (BYOD) strategy, containerization – keeping business apps and data encrypted and separate from the rest of the device – makes it possible to effectively secure the corporate data and applications held on the device. As mobile malware becomes more and more prolific, this approach can help businesses stay one step ahead of cyber criminals when it comes to protecting business-critical information on personal devices.

Make sure you continue to check out our website for our latest developments in these spaces and, as always, share your thoughts and comments with us below!

 
