Archive - Enterprise Monitoring RSS Feed

Monitoring and Managing Devices for Your Distributed Workforce


The rise of cloud computing, the move to Software as a Service (SaaS), and the continuous increase in the abilities of tablets and smartphones have created a paradigm shift in the way IT does its business. It used to be that IT could lock down the infrastructure, put up a firewall, and from there control employee access, monitor compliance, and maintain security.

Today, you not only need to manage laptops and devices used by a highly distributed workforce, you also need the capability to do your job away from the office by managing remotely. Let’s look at the challenges to properly monitoring and managing your mobile workforce (sometimes called off network monitoring), and how to overcome those challenges successfully and efficiently.

The Challenges of Off Network Monitoring

In today’s global and cloud environment, leaving the office and network doesn’t mean leaving behind corporate data; your job doesn’t end when your workforce leaves the office, and your own leaving the office doesn’t mean your work is done. You are still responsible for the security and monitoring of all your systems.

How Endpoint Management (aka Remote Management) Helps Support Enterprise Growth

If you are considering growing your business, you are not alone. However, have you analyzed what that growth might mean for your IT operations? For example, when you grow your business through mergers and acquisitions, how do you integrate it all so everything runs smoothly? How do you manage security measures over a widening network? How do you effectively troubleshoot when your infrastructure may be spread over multiple locations? How do you make sure regulatory compliance is met throughout your growth?

Most importantly, how do you do all this with limited resources while trying to keep costs in check?  You need to make endpoint management a part of any growth strategy. IT managers in all industries face a similar struggle, but let’s look at the banking and credit union industry in particular.


IAM Profitable: Get Your Piece of the IAM Market


If you’re an MSP or an IT service provider, then you’re involved in a business model that’s always looking to improve its offerings and increase its bottom line. With the global IAM (Identity and Access Management) market increasing at an explosive rate, being able to offer authentication and password management isn’t just a smart move, it’s also a safe move!

Why Your IT Monitoring System Must Be Part of Your Security Defense


Whether you are a managed service provider using a remote monitoring and management (RMM) system to monitor client infrastructures, or an IT Operations group monitoring your company’s internal infrastructure, your IT management system is an important infrastructure component that needs to be secured. It’s also a key tool that you can use as part of your security apparatus to help protect the remaining infrastructure. Without strong security capabilities, your RMM system can easily become a tool for hackers and cyber criminals instead of serving its intended purpose.

PCI DSS Compliance

This is particularly important for businesses where industry security compliance is required. For retail and financial businesses, the Payment Card Industry Data Security Standards (PCI DSS) require that cardholder data be protected behind a firewall, yet the monitoring system, especially if it’s remote, is likely to operate through the firewall. Hackers gaining access to the monitoring system have immediate entry to the core of your infrastructure, or to end devices such as POS terminals and self-service kiosks. Beyond direct access, remote management systems can obviously be used to change configurations and security settings on communications devices and firewalls, to download software (or malware) to end devices, and to patch (or mark as patched) existing applications, any or all of which can be used to open further vulnerabilities.

To further protect against communication with “untrusted networks” (the term used for any network not under direct control), the PCI DSS standards also require the securing of infrastructure information, the maintenance of an accurate and up-to-date inventory of all components that are in scope for PCI DSS requirements, and the development and maintenance of standard configurations for those components, along with many other factors. Your RMM system is likely to be a significant help in meeting these expectations and in supporting ongoing audits. For example, policy management can be used to ensure configuration standards are maintained and that only approved applications are able to run on protected end devices. It can also be used to periodically verify that mobile laptop computers have encryption technology installed and enabled, protecting cardholder data from disclosure in the event of theft.
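The policy-management idea above can be sketched in code. The following is a minimal, hypothetical example of sweeping an inventory snapshot against a PCI-style configuration policy; the field names, the approved-application list, and the policy itself are illustrative assumptions, not any particular RMM product's schema:

```python
# Hypothetical sketch: checking an RMM inventory snapshot against a
# PCI DSS-style configuration policy. Field names are illustrative.

REQUIRED = {"firewall_enabled": True, "disk_encryption": True}
APPROVED_APPS = {"pos-client", "av-agent", "rmm-agent"}

def compliance_issues(device):
    """Return a list of policy violations for one inventory record."""
    issues = []
    for setting, expected in REQUIRED.items():
        if device.get(setting) != expected:
            issues.append(f"{setting} must be {expected}")
    for app in device.get("installed_apps", []):
        if app not in APPROVED_APPS:
            issues.append(f"unapproved application: {app}")
    return issues

devices = [
    {"id": "pos-01", "firewall_enabled": True, "disk_encryption": True,
     "installed_apps": ["pos-client", "av-agent"]},
    {"id": "laptop-07", "firewall_enabled": True, "disk_encryption": False,
     "installed_apps": ["rmm-agent", "game-launcher"]},
]

# Map device id -> list of violations; an empty list means compliant.
report = {d["id"]: compliance_issues(d) for d in devices}
```

Run against a nightly inventory export, a report like this doubles as audit evidence that configuration standards are being enforced.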

HIPAA Compliance

For IT professionals in the healthcare field, securing protected health information (PHI) is a major issue. While HIPAA and its related regulations do not spell out how patient data should be protected, they go beyond technical recommendations to legally mandate that it must be protected. Both healthcare organizations (HIPAA’s “covered entities”) and their business associates (organizations supplying healthcare-related services that require access to patient data) are subject to HIPAA regulations. From an IT perspective, this means that the IT Operations personnel of both covered entities and any business associate organizations must take every precaution to maintain security and patient privacy when managing electronic systems that contain or process PHI.

Perhaps more interesting is the case of MSPs who provide managed services to healthcare organizations. It can be argued that, by the letter of the law, they are not considered business associates for the purposes of HIPAA, on the grounds that they do not require access to patient data to do their work. In practice, however, it’s unlikely that a healthcare provider would contract for managed services without the requisite guarantees of security and data protection. Certainly it’s been a common Kaseya experience that when the need for strong security capabilities and processes is raised, MSPs who serve healthcare clients immediately recognize it.

So in either case, whether you are an internal or an external IT service provider, you should be taking all necessary steps to secure your monitoring capabilities and to use them, appropriately, to ensure the security of the systems you monitor and manage. And it’s our belief that MSPs seeking healthcare clients will find that strong security capabilities and processes are the price of entry into that market.

Beyond securing their technology, those providing IT services must also ensure that their own policies and procedures support their (internal or external) customers’ needs. The use of strong passwords, single sign-on, multi-factor authentication, cyclical password updates, regular threat assessments, defined device configurations, test-before-going-live reviews, frequent security education, etc., should be documented and adhered to as requirements for all systems and personnel.

Kaseya is the leader in cloud-based remote monitoring and management and offers a comprehensive monitoring solution used by MSPs and SMBs worldwide. To find out more about what you can accomplish from a single pane of glass, and how your monitoring solution can help protect your infrastructure, click here.

To find out how best to control access to your secure assets and applications, and how you can log who can access what, click here.

If you’re looking for even more ways to improve the efficiency of your IT staff, why not take a look at a system which offers innumerable utilities from a single pane of glass.

Are you using your IT monitoring systems to enhance the security of your IT infrastructure?

Author: Ray Wright

MSPs and Big Data: Why and How


Vikas Aggarwal

CEO, Zyrion Inc.

Regarded in the past as a non-critical part of day-to-day operations, Big Data and its delayed analysis were relegated to batch-processing tools and monthly meetings. Today, as the IT industry has snowballed into a fast-moving avalanche of cloud, virtualization, outsourcing, and distributed computing, the science of extracting meaningful, intelligent metrics from Big Data has become an important, real-time component of IT Operations.

WHY BIG DATA IN CLOUD PERFORMANCE TOOLS

IT management systems no longer work in vertical or horizontal isolation, as they did just a few years ago. The interdependence between IT services, applications, servers, cloud services, and network infrastructure has a direct and measurable impact on business services. The volume of data generated by these components is huge, and it arrives too fast for traditional tools to perform any kind of real-time correlation; but if the data is correlated properly, it can give mission-critical insight into:

  • the response times and behavior of an IT service or application
  • the cause of performance degradation of an IT service
  • trend analysis and proactive capacity planning
  • whether SLAs are being met for business services

This data has to be analyzed and processed in real time in order to provide proactive responses and alerting for service degradation. The data being collected can be structured or unstructured, comes from a variety of systems that depend on each other for optimal performance, and has little to no obvious linkage or keys tying it together (e.g., the data coming from an application is completely independent of the data coming from the network it runs on). Some examples of data sources that need to be correlated are application logs, netflow, JMX, XML, SNMP, WMI, security logs, packet analysis, business service response times, weather, news, etc.

Managed Service Providers are themselves moving to hybrid cloud environments and offering services such as security, backup, VoIP, applications, and compute resources. At the same time, enterprises are outsourcing more and more of their IT performance management, and they expect the MSP to handle any and all kinds of IT data. Managed Service Providers must therefore adopt monitoring systems that are flexible and can handle Big Data efficiently. Such IT monitoring platforms will allow them to offer higher-value services to enterprise customers: once versatile Big Data systems are in place, MSPs can offer real-time responses to alarms and alerts and give meaningful business-impact analysis to these enterprise customers.

Contextual analytics and presentation of data from multiple sources is invaluable to IT Operations in troubleshooting poor application performance and user satisfaction. As a simple example, a user response time application could send an alert that the response time of an application is too high. Application Performance Monitoring (APM) data could indicate that a database is responding slowly to queries because the buffers are starved and the number of transactions is abnormally high. Integrating with network netflow or packet data would allow immediate drill down to isolate which client IP address is the source of the high number of queries.
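The drill-down described above can be sketched as a small script. This is an illustrative sketch only: the metric names, thresholds, and netflow records are invented for the example, and a real system would pull them from its APM and flow collectors:

```python
# Illustrative sketch of the drill-down described above: a response-time
# alert triggers a check of database metrics, then a netflow query to find
# the client generating the most queries. All data here is hypothetical.
from collections import Counter

def top_talker(netflow_records, dst_port):
    """Return the client IP sending the most flows to the given port."""
    counts = Counter(r["src_ip"] for r in netflow_records
                     if r["dst_port"] == dst_port)
    ip, _ = counts.most_common(1)[0]
    return ip

# APM metrics suggesting a starved, overloaded database (invented values):
db_metrics = {"buffer_hit_ratio": 0.42, "transactions_per_sec": 9100}
netflow = [
    {"src_ip": "10.0.0.5", "dst_port": 5432},
    {"src_ip": "10.0.0.5", "dst_port": 5432},
    {"src_ip": "10.0.0.9", "dst_port": 5432},
    {"src_ip": "10.0.0.9", "dst_port": 443},
]

# If the database looks starved and busy, isolate the noisiest client:
if db_metrics["buffer_hit_ratio"] < 0.8 and db_metrics["transactions_per_sec"] > 5000:
    suspect = top_talker(netflow, dst_port=5432)
```

The value is in the joined context: neither the APM metric nor the flow data alone identifies the offending client, but correlated, they do.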

HOW TO HANDLE BIG DATA FOR CLOUD PERFORMANCE

Traditional monitoring or BI platforms are not designed to handle the volume and variety of data from this hybrid IT infrastructure. The management platforms need to be designed to correlate Big Data from the IT components in real-time and provide feedback to the operations team for proactive responses. As these monitoring systems evolve, their Big Data correlation components will become richer and more analytical and will position these MSPs for the IT environments of the future.

New-generation MSP monitoring solutions that are scalable and multi-tenant, with predictive analytics and a granular security model, are now available from a small number of vendors. Single-use systems designed for just network data or just application data are trapped within the same boundaries that make Big Data meaningless: by its very nature, a Big Data system needs to handle a very wide variety of data sources to provide greater uptime through faster troubleshooting and lower OpEx through correlated analysis.


Vikas Aggarwal is CEO of Zyrion Inc., a leading provider of Cloud and Network Monitoring software for large enterprises and Managed Service Providers. You can read more about Zyrion’s MSP monitoring solution and how it handles Big Data on their web site.

How Much Product Functionality Are You Really Using?

Most software products in the ITIL stack (monitoring, ticketing, etc.) perform their basic functions equally well when compared to other products in their class. However, best-of-class products have a lot more functionality, and those extra features are typically forgotten in the craziness and urgency of operational deployments. In most cases, close to 60% of the product features are paid for but unused.

One might argue that these features are not important or needed in the enterprise. Interestingly, most of these unused features are precisely the reason for selecting the product in the first place, serving as differentiators for choosing one product over another. Using these advanced features would in most cases give much better ROI to the customer, as is normally identified during the selection process. Yet the bulk of these differentiating features go unused.

Some of the key reasons for not being able to derive greater benefits from software:

  • Using the new features requires a change in operational processes, and this change has usually not been factored in during deployment.
  • The selection team is different from the end users of the deployed products, and the end users have not been exposed to the product’s key differentiating features.
  • There is a desire to implement the software quickly and reduce implementation risk.
  • There is a lack of skills to use the advanced features of the product.

A very simple and effective way to address these issues is training and re-training. In most cases, the customer purchases training at the beginning of the engagement but fails to arrange advanced training after the product has been deployed and running for a while. Enterprises should make it a point to set aside budget and time to review the product deployment a few months after implementation, and to ask the software vendor to demonstrate how the advanced feature set can help reduce expenses and deliver faster ROI. This second stage of training should focus on how the product can interact better with existing OSS tools and improve processes.

The deployment of any best-of-class product therefore requires two stages: an initial stage using the most common features (which, interestingly, are not the reason the product was selected in the first place), and a second, most often overlooked stage, much later, when the best-of-class features are actually put to use.


Vikas Aggarwal is CEO of Zyrion Inc., a leading provider of Cloud and Network Monitoring software for large enterprises and Managed Service Providers. You can read more about Zyrion’s cloud monitoring solution here.

Climbing Up the Stack: MSP Customers Demand Higher Value in Monitoring Services

Managed Services is now a mature industry, and as it moves further down this maturity path, differentiating or winning customers on a lower price per monitored device can only continue for a little while. As Managed Service Providers race to adopt automation and other workflows to reduce their operating expenses, the price gap between providers can only narrow.

As one would expect, after having squeezed the last drop of overhead from their opex, Service Providers need to start offering higher-value, differentiated services to their customers in order to stay ahead. When life began as a VAR many years ago, the focus was on verticals and getting familiar with an industry. Today, having transformed into service providers offering varied benefits to those same customers, MSPs need to be able to monitor the custom applications and services within the industry. Look beyond email and web services: every one of your customers has a unique IT application or service at the core of their business, whether it’s medical billing, online gaming, or streaming video. Today, most MSPs monitor the IT infrastructure and databases behind these custom applications, but few have extended their services to monitoring the applications themselves.

Monitoring your customers’ ‘custom’ applications and services requires a higher-value sale: not only does the MSP have to understand the customer’s business and the applications that support it, they also need to leverage their monitoring software’s APIs or custom-monitoring capabilities to collect relevant metrics from those applications. This requires both a salesperson who can explain the benefits to the customer and a technical person with some programming skills to use the APIs.

The benefits are obvious: the end customer now has an MSP who can monitor the IT services that directly impact their business bottom line, and the MSP now has a stronger relationship with the end customer because of the high value provided. Presenting some of these service-oriented performance metrics on a rich dashboard also raises the MSP’s visibility among the end customer’s senior managers, something they probably could not achieve by monitoring only devices and applications instead of IT services.

We are already seeing this transition and demand for higher value among MSPs as the industry moves further along its maturity curve. Software technology vendors are beginning to adopt technologies such as ITIL BSM into their offerings, since these best-practice technologies are essential to deliver customized monitoring services to end customers.


Recession-Driven Innovations: Agility, Automation & Predictive Analytics

Agility and velocity have become the new success necessities for organizations in the past few years, driven largely by recession realities and unpredictable business environments. The recession, viewed positively, boosted innovation and accelerated the development and adoption of new processes. The IT customer is increasingly global, the realm of IT services grows larger every day, and the sprawling, distributed IT components demand intelligent ways to manage and monitor the infrastructure.

The administrative burden of managing this burgeoning infrastructure will only increase for IT departments unless they adopt processes and software to automate most of it. To automate processes, you need to integrate the different workflows seamlessly, which requires the software products to have flexible APIs. The order-entry, provisioning, monitoring, and billing workflows are all candidates for integration and automation. There have been significant advances even within cloud monitoring and management solutions to reduce the administrative burden through templates, threshold baselining, and the creation of service models.
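As a concrete illustration of workflow integration through APIs, the sketch below registers a newly provisioned server with a monitoring system over a fictional REST endpoint; the URL, payload fields, and template name are assumptions, since real products expose their own schemas:

```python
# Hypothetical sketch of provisioning-to-monitoring automation: when a new
# server is provisioned, register it with the monitoring system over a
# (fictional) REST API so no manual enrollment step is needed.
import json
from urllib import request

MONITOR_API = "https://monitor.example.com/api/devices"  # placeholder URL

def register_device(hostname, template="linux-base", dry_run=True):
    """Build (and optionally send) the enrollment request for one host."""
    payload = {"hostname": hostname, "template": template}
    if dry_run:  # skip the network call in this sketch
        return payload
    req = request.Request(MONITOR_API, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    return json.load(request.urlopen(req))

# A provisioning workflow would call this for each newly created server:
enrollment = register_device("web-042")
```

Hooked into the provisioning step, a call like this removes a manual handoff between teams, which is exactly the kind of seamless workflow integration a flexible API makes possible.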

The other innovation has been in data analytics. IT customers’ demands have always been dynamic, and IT departments have reacted by provisioning for peak demand, resulting in wasted idle compute resources. Application resource usage also varies by hour and day of week, so it is increasingly important for IT departments to understand the behavior patterns of their network and applications in addition to their computing resources. The number of users, response times, queued messages, and database query rates all vary by time of day; understanding the usage pattern and deviations from it helps isolate the root cause of IT service performance degradation much faster, ultimately yielding higher customer satisfaction. More importantly, using APM behavior patterns greatly reduces false alarms for IT Operations and lowers TCO.
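The baselining idea described here can be sketched with a few lines of code: learn the normal range of a metric from history, then flag samples that deviate strongly, rather than alerting on a fixed threshold. The sample data and the three-sigma cutoff are illustrative choices:

```python
# A minimal baselining sketch: learn the mean and spread of a metric from
# history, then flag samples that deviate strongly, instead of using a
# fixed threshold. The data and the 3-sigma cutoff are illustrative.
from statistics import mean, stdev

def is_anomalous(history, sample, sigmas=3.0):
    """Flag a sample more than `sigmas` standard deviations from baseline."""
    mu, sd = mean(history), stdev(history)
    return abs(sample - mu) > sigmas * sd

# Query rate observed at the same hour of day over the past two weeks:
baseline = [510, 495, 530, 505, 520, 500, 515, 525, 498, 512, 507, 519, 503, 511]

is_anomalous(baseline, 516)   # within the normal band
is_anomalous(baseline, 900)   # a sudden spike, well outside it
```

Because the threshold follows the metric's own history, the same check works for a query rate, a queue depth, or a response time, which is how behavior-pattern monitoring cuts false alarms compared to one-size-fits-all static thresholds.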

Automation and analytics are smart product features focused on reducing the administrative burden in today’s distributed cloud environments. Keeping business necessities in mind, these innovative features are pragmatically relevant and a must for all IT departments in the current business environment.

APM – The ‘A’ Dimension

As more applications migrate to the evolving private and public cloud infrastructure and permeate the sprawling, distributed “new” IT environment, the Gartner-published “five dimensions” of Application Performance Monitoring become essential requirements for any APM platform. Capturing end-user experience, topology, deep-dive component monitoring, and analytics will enable IT Operations to isolate performance issues, reduce MTTR for application services, and ultimately deliver higher user satisfaction.

With the arrival and rapid adoption of virtualization and cloud infrastructure, the reduction in cost for ‘incremental units’ of computing power, the ability to flex capacity up and down as needed, and the absence of the restrictions imposed by traditional models are all key drivers for migrating applications to this distributed cloud infrastructure. With these benefits, however, comes the added burden of high administration overhead from managing a virally sprawling and dynamic IT environment.

As the number of discrete virtual servers, components, and resident applications explodes, performance monitoring and intelligent analytics for rapid decision-making will be critical for IT Operations. Manually intensive legacy point-monitoring tools will not be able to keep up in a dynamic, complex environment where applications can move across the underlying IT infrastructure in almost real time.

Of course, the better approach is to adopt the APM “A” dimension: automation in the monitoring platform, to reduce the burden of routine administrative tasks in application monitoring. Implementing the right systems and processes, and finding a monitoring solution that uses a good degree of automation, is essential to regain the efficiency lost to the increased complexity of distributed application infrastructure. Automation in monitoring ensures consistency in performance monitoring and benchmarking, enabling IT Operations to make better, proactive decisions about application performance. As JP Garbani at Forrester said recently, gaining the right level of productivity in IT operations will come from using better tools, and specifically, automation.

Cloud Monitoring Software: Automation and Intelligence Not Optional

The quest to improve the productivity and efficiency of IT organizations is an ongoing one. A number of technologies and processes have been adopted over the decades to make IT operations leaner and more effective. With the arrival and rapid adoption of virtualization technology and cloud infrastructures in the past few years, IT organizations worldwide are starting to realize significant economy-of-scale benefits. Reduced costs for ‘incremental units’ of computing power, the ability to flex capacity up and down more easily as needed, and the absence of the restrictions imposed by traditional models will all drive a dramatic increase in the consumption of computing and application resources as organizations are freed up to do more. On the flip side, steps will need to be taken to deal with the resulting increase in the administration burden, or else the efficiency gains realized from shared, flexible IT infrastructure will be outstripped by the high cost of managing a more dynamic and complex environment.

Terms like “virtualization sprawl” have been coined to refer to the increase in the number of discrete virtual servers and related application components within the overall IT environment. This is no longer a hypothetical scenario: organizations are already experiencing administration challenges because of the fundamental IT transformation driven by virtualization and cloud technologies. Consider the case of a leading educational institution in the Northeastern United States. Prior to embarking on an aggressive virtualization initiative, the operations team was responsible for ensuring the performance of approximately 1,000 distinct physical servers. By the time the first phase of the server consolidation and virtualization initiative was completed, the team was tracking and managing the performance of over 7,000 virtual servers.

As the number of discrete virtual servers, components, and resident applications explodes, the performance-monitoring and root-cause-analysis demands on IT administrators will multiply. Manually intensive legacy and point monitoring tools will not be able to keep up, and organizations will face significant challenges in detecting and resolving issues in a timely manner. In one recent case of an organization being overwhelmed, the IT team resorted to forced daily ‘proactive reboots’ of a large number of its servers. The team claimed that this workaround was the only way to keep the infrastructure performing in the absence of a comprehensive monitoring and management solution to identify real issues and isolate problem sources. The IT team acknowledged that the organization’s users and business operations were being impacted by this daily reset cycle, but viewed the approach as the lesser evil compared to blind, reactive fire-fighting.

Of course, the better approach would be to take a more strategic stance and implement the right systems and processes to assure the performance of the IT infrastructure. Today’s cloud monitoring software solutions have to be capable of automating many routine administration tasks. More importantly, these systems need built-in intelligence to infer what is going on in the IT infrastructure and automate decision-making. The increased demands on the IT team will be partially offset by the automation capabilities of the monitoring solution, allowing IT personnel to focus on deeper and more complex administration tasks. Furthermore, the overall efficiency and utilization of IT resources will be higher with the right capabilities in the IT monitoring software (see http://tiny.cc/cwytn to learn how).
