Blog

The views expressed in the posts and comments of this blog do not necessarily reflect those of Sigma Solutions. They should be understood as the personal opinions of the author. No information on this blog should be taken as official.


  • Data backup may not be very glamorous, but it is arguably the most critical technology within the data center. Organizations need foolproof solutions to protect mission-critical data and enable rapid recovery in the event of disaster or system failure.

    Yet backup continues to be a pain point for many organizations. In many organizations, backup technologies and processes have not kept pace with growing data volumes and increasing virtualization, making it difficult to complete backups within the available window. And should disaster strike, organizations may find that recovery is challenging and excruciatingly slow.

    A recent study by IDC found that many organizations are facing backup complexity within their heterogeneous environments. Almost 37 percent of organizations have to simultaneously back up virtual, physical and cloud-based servers. Of those that are managing virtual infrastructures, 54 percent have to manage two or more different hypervisors.

    These challenges are exacerbated by the fact that many organizations have multiple backup solutions within their environments. Point solutions for VM backup and de-duplication add to the complexity and create integration nightmares. As data volumes continue to mushroom and 24×7 availability becomes the rule, IT organizations are struggling to keep up. Legacy backup systems become roadblocks that impact IT’s ability to deliver services and meet increasingly stringent SLAs.

    Due to these headaches, backup responsibilities are being pushed outside of traditional backup administration roles and onto database administrators, virtualization managers and others within the IT organization. Such fragmented processes only add to the confusion and complexity.

    Organizations need a holistic data protection platform that can accommodate the wide range of workloads within today’s data center, including physical and virtual servers, networked storage and arrays, and big data. This platform should deliver improved performance, reliability, manageability and scalability, coupled with reduced complexity and lower total cost of ownership.

    There are a number of mature and emerging technologies that can help organizations overcome their backup challenges. We continue to see strong demand for purpose-built backup appliances that combine software, storage arrays, a server engine and de-duplication. The latest solutions can be tightly integrated with backup software, and deliver the performance and scale to support thousands of VMs.
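
    For readers unfamiliar with how de-duplication stretches backup capacity, here is a toy Python sketch of the core idea: split the data into chunks and store each unique chunk only once. Real appliances use variable-length chunking, disk-backed indexes and far more engineering, so treat this purely as an illustration.

    # Toy illustration of the de-duplication idea used by backup appliances:
    # split data into chunks, store each unique chunk once, keep references.
    import hashlib

    CHUNK = 4096
    store = {}                       # chunk hash -> chunk bytes

    def backup(data: bytes):
        """Return the list of chunk hashes that reconstruct `data`."""
        recipe = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)      # store new chunks only
            recipe.append(digest)
        return recipe

    night1 = backup(b"A" * 8192 + b"B" * 4096)
    night2 = backup(b"A" * 8192 + b"C" * 4096)   # mostly unchanged data the next night
    print(f"Unique chunks stored: {len(store)} (instead of {len(night1) + len(night2)})")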

    But data protection is as much strategy as it is technology. Organizations need to implement policies and processes around storage tiering and data archival to reduce the burden on primary storage and backup and recovery systems. Cloud-based and hybrid solutions, such as Disaster-Recovery-as-a-Service, can also help to relieve bottlenecks and improve data protection.

    IDC predicts that storage will increase 50x over the next 10 years, but IT staffing is only expected to grow 1.5x over the same timeframe. Organizations need to rethink their backup environments and implement a unified approach that simplifies management and delivers performance and scale. It may not be glamorous, but it’s absolutely critical.

    Sigma’s engineering team has the know-how and experience to architect an end-to-end data protection strategy. We are helping organizations do more with less while maximizing the value of their existing IT investments.

  • Cloud computing can deliver reduced capital and operational costs, increased agility and simplified IT management, enabling IT personnel to focus on strategic initiatives rather than keeping the lights on. But security and data privacy risks remain obstacles to public cloud deployment, and many organizations are concerned about loss of control to a third party, cloud application performance and regulatory compliance issues.

    Private clouds help address these concerns while enabling organizations to leverage the technical benefits of cloud computing. Applications and data remain squarely behind the firewall, while the data center becomes more flexible and scalable. However, implementing a private cloud can be challenging and requires a higher investment than public cloud solutions.

    Enter the hybrid cloud.

    As the name suggests, a hybrid cloud enables organizations to have certain services and applications managed externally on a public cloud and others managed internally on a private cloud. This makes it possible to keep mission-critical applications and sensitive data close to the vest while leveraging the efficiency and flexibility of the public cloud for services such as data archival.

    A hybrid cloud isn’t the same as using public and private cloud services simultaneously. In a true hybrid cloud, the public and private clouds are integrated to allow IT to easily migrate workloads in order to optimize the environment. A single interface streamlines the flow of data between the public and private clouds and creates a consistent end-user experience.

    A hybrid cloud must be managed with as much rigor as a private cloud and traditional data center solutions. The key is to minimize the design differences between the public and private cloud environments so a centralized management strategy can be applied to the hybrid cloud as a whole with as few adjustments as possible.

    A hybrid cloud management strategy should cover:

    • Best practices for configuration, change control, patch management and implementation.
    • Security, including the encryption of data during transmission and at rest, access controls, firewalls and policy enforcement.
    • Device fault monitoring and performance alerts, which should be centrally managed.
    • Budget controls, including alerts for both unused resources and charges that exceed certain levels.
    • Capacity planning and provisioning for both the onsite data center and the public cloud.
    • Data classification to ensure that the most sensitive data remains in the private cloud.
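
    To make the last two items concrete, the following Python sketch shows the kind of placement check a policy engine might run before provisioning a workload. The classification labels, budget threshold and workload fields are hypothetical examples, not features of any particular product.

    # Minimal sketch of a hybrid cloud placement policy check.
    # Classification labels, cost threshold and workload fields are illustrative.
    PRIVATE_ONLY = {"restricted", "regulated"}   # data classes pinned to the private cloud
    MONTHLY_BUDGET_ALERT = 5000                  # dollars; triggers a budget alert

    def placement_for(workload):
        """Return 'private' or 'public' plus any alerts for a workload dict."""
        alerts = []
        if workload["data_class"] in PRIVATE_ONLY:
            target = "private"
        else:
            target = "public"
            if workload["est_monthly_cost"] > MONTHLY_BUDGET_ALERT:
                alerts.append("estimated spend exceeds budget threshold")
        if not workload.get("encrypted_at_rest", False):
            alerts.append("encryption at rest not enabled")
        return target, alerts

    # Example: internal reporting data may go public, but raises a budget alert.
    print(placement_for({"name": "reporting", "data_class": "internal",
                         "est_monthly_cost": 7500, "encrypted_at_rest": True}))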

    Network World has declared 2014 to be the year of hybrid cloud adoption, and Gartner predicts that half of mainstream enterprises will have a hybrid cloud by 2017. Nevertheless, a change in culture will likely be necessary for widespread hybrid cloud adoption. Successful implementation requires not only different skills and expertise but also a different mindset than the one used to manage traditional IT infrastructure.

    As IT continues its transformation from technical asset to strategic business asset, cloud services must be evaluated for their ability to improve business processes and user experiences, not solve technical problems. A strategic approach is the key to taking full advantage of a hybrid cloud.

    55 percent of respondents to Computerworld’s 2014 IT Salary Survey said they communicate frequently or very frequently outside of business hours, including when they’re on vacation. According to the TEKsystems Stress & Pride survey, 41 percent of IT professionals said they’re expected to be available around the clock, 38 percent are expected to be accessible only during traditional business hours, and 21 percent fall somewhere between 9-to-5 and 24×7.

    Clearly, today’s always-on business mentality, which requires always-available support, has led to an around-the-clock IT culture. This level of IT accessibility may be necessary if you expect to stay relevant and competitive, but delivering and maintaining 24/7 responsiveness is a tall order.

    First, the complexity of today’s IT environments makes it difficult to find, hire and train IT professionals who are capable of taking a call at 2 am and quickly solving the problem. Organizations that have workforces and customers dispersed across the globe must have the same level of support on Sunday at midnight that they have during regular business hours. This can quickly turn into a costly proposition.

    When IT is focused on fielding and responding to support requests at all hours, it becomes virtually impossible to escape the old 80/20 ratio. 80 percent of IT’s time is spent on routine maintenance, and only 20 percent is left over for innovation. Having 24×7 availability is largely wasted when so little time can be spent developing new services and solutions that create competitive advantages.

    Also, consider the heavy burden placed upon the collective shoulders of your IT department and how it impacts their productivity. This pressure results in a phenomenon called presenteeism, which occurs when an employee is physically present but not performing at optimal levels, usually due to stress, depression or exhaustion. Studies have shown that presenteeism negatively impacts productivity more than absenteeism.

    Many organizations are supplementing their in-house IT departments with outsourced managed services to meet the demands of the around-the-clock IT culture. By turning over day-to-day maintenance tasks, support and other responsibilities to a managed services provider, organizations can take advantage of a number of benefits.

    • Staffing relief. Instead of hiring and training additional staff to ensure 24×7 availability, let the managed services provider take on this responsibility. For a monthly fee, you’ll have access to a team of IT experts who are using the latest hardware and software.
    • More innovation. Outsourcing routine maintenance and support is the first step toward reversing the 80/20 ratio. Let your in-house IT department focus on strategic growth initiatives that improve business agility and set your organization apart from the competition.
    • Lower IT costs. It’s basic math. Utilizing managed services is much more cost-effective than adding staff, and it may enable you to streamline your IT infrastructure.
    • Greater IT job satisfaction. IT professionals don’t want to spend all of their time keeping the lights on; they want a challenge. A managed services provider helps your IT staff make a difference without the burnout that results from being on call 24×7.
    • Additional perks. Depending on the managed services you choose, your organization could also benefit from improved security, increased storage performance and capacity, less unplanned downtime, and improved disaster recovery planning.
  • In Part 1 of this post, we introduced Cisco Application Centric Infrastructure (ACI), a transformational approach to IT that many industry experts claim is among the most disruptive data center innovations in a generation. Cisco ACI offers a breakthrough solution that meets the agility demands of the modern enterprise in today’s application-driven business environment.

    Cisco ACI speeds application deployment cycles from months to minutes, breaks down silos to create a single point of management for all administrators in both physical and virtual networks, and boasts an open ecosystem of partners working together to drive innovation and deliver maximum value to enterprises.

    The core technology upon which Cisco ACI is built is as impressive as the benefits it delivers. This technology includes:

    The Nexus 9000 Switch Family. Serving as the foundation of Cisco ACI, the new Nexus 9000 switches provide both modular and fixed 10/40/100 Gigabit Ethernet switch configurations. This allows enterprises to seamlessly transition from traditional NX-OS mode to the new ACI mode, which leverages ACI’s application policy-driven services and infrastructure automation capabilities. Designed with both merchant silicon and custom ASICs from Cisco, Nexus 9000 switches provide improved performance, scalability, security, virtualization support, programmability, and power and cooling efficiency.

    Cisco Application Policy Infrastructure Controller (APIC). This new appliance is at the core of automation and management for the ACI fabric, bringing together physical, virtual and cloud infrastructure management in a common, open framework. This open architecture allows for the integration of third-party Layer 4 through 7 services, virtualization and management. Cisco APIC optimizes performance and provides centralized, system-level visibility and application-level control based upon defined application network profiles, which are used to expedite the provisioning of network resources.

    Cisco Application Virtual Switch (AVS). Specifically designed for Cisco ACI and managed by Cisco APIC, the Cisco AVS enables intelligent policy enforcement and optimal traffic steering while enhancing application visibility and performance.

    Cisco Adaptive Security Virtual Appliance (ASAv). This is the first transparently integrated, application-based security solution, providing consistent security across both physical and virtual environments.

    40G BiDi Optics. This innovation allows enterprises to avoid massive fiber overhauls as they move to 10/40G. 40G BiDi makes it possible to maintain existing 10G cables, resulting in significant labor and fiber cost savings.

    Cisco ACI is a direct response to the need for greater business agility, which can only be achieved through an application-centric, unified operational model. Let’s discuss how you can transform your data center, simplify IT management and unleash your applications quickly and efficiently with Cisco ACI.

    A new study by Osterman Research found that the average small to midsize business (SMB) is using 14.3 cloud-based applications. By some estimates, workers are using 10 times more cloud apps than IT thinks — so if you’re aware of 30 cloud apps being used in your organization, you’re probably looking at 300.

    Cloud-based applications are faster to deploy, simpler to use and have a lower upfront cost than traditional enterprise applications. As a result, many organizations are using cloud-based applications to become more adaptive to business conditions and responsive to customer demands. However, cloud adoption is in many ways impeding those goals.

    These tools by nature exist outside the IT infrastructure and, as a consequence, outside of the general flow of data among business processes. Unless organizations take steps to integrate cloud and enterprise applications, they wind up with application “silos” that impact productivity and limit the economic value of the software investment.

    Enterprise application integration (EAI) has its roots in the 1990s when many companies started buying packaged software solutions that automated specific business processes. These systems created silos of automation that produced redundant information and became problematic when common data changed — changes to data in one application would not necessarily be reflected in the other. Organizations began searching for ways to integrate these disparate systems in order to automate business processes that spanned them.

    EAI is still very relevant today. Information consumers are demanding that data be made available to them regardless of its structure or distribution across the enterprise. The cloud is simply increasing the importance and changing the nature of EAI.

    Cloud-based applications deliver rapid business benefits without the burden on IT to manage and maintain both the application and the underlying IT infrastructure. But not every application can move to the cloud, so most organizations end up with a hybrid environment that requires the integration of cloud-based apps with traditional applications and data sources. Whether it’s on-premises or in the cloud, an application has to support the organization’s business processes. As a result, application integration has become critical.

    Unfortunately, cloud integration has been hindered by a lack of complete integration tools. Traditional EAI solutions provide the functionality large enterprises need to integrate complex enterprise applications but lack the speed and simplicity that’s desirable for cloud deployments. Custom code offers a relatively low upfront cost but is time-consuming to develop and costly to maintain over the long term.

    The drawbacks of these solutions are driving strong interest in hybrid cloud solutions. In the hybrid cloud, organizations can tap both public and private cloud services via a single interface that streamlines the flow of data and creates a consistent user experience. Integration Platform-as-a-Service (iPaaS) supports hybrid cloud application integration with tools that enable applications to communicate and share data sources.
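
    As a rough illustration of the plumbing an iPaaS is meant to abstract away, the sketch below hand-codes one such integration in Python: it pulls contacts from a hypothetical cloud CRM’s REST API and mirrors them into a local database. The endpoint, token and table layout are made up for the example; an iPaaS would replace this kind of point-to-point script with managed connectors, mapping and monitoring.

    # Hand-rolled cloud-to-on-premises sync -- the kind of point-to-point
    # integration an iPaaS platform is meant to replace with managed connectors.
    # The API URL, token and table layout below are hypothetical.
    import requests
    import sqlite3

    CRM_API = "https://crm.example.com/api/v1/contacts"   # hypothetical cloud CRM endpoint
    TOKEN = "REPLACE_ME"                                   # hypothetical API token

    def sync_contacts(db_path="crm_mirror.db"):
        resp = requests.get(CRM_API, headers={"Authorization": f"Bearer {TOKEN}"},
                            params={"updated_since": "2014-01-01"}, timeout=30)
        resp.raise_for_status()
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS contacts (id TEXT PRIMARY KEY, name TEXT, email TEXT)")
        for c in resp.json().get("contacts", []):
            # Upsert so repeated runs keep the on-premises copy current.
            conn.execute("INSERT OR REPLACE INTO contacts VALUES (?, ?, ?)",
                         (c["id"], c["name"], c["email"]))
        conn.commit()
        conn.close()

    if __name__ == "__main__":
        sync_contacts()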

    Sigma Solutions has developed strong relationships with leading cloud providers to complement the skill sets of our engineering team. We are helping organizations maximize the value of the cloud by developing strategies for integrating cloud services into the overall IT infrastructure.

  • We’ve all heard the statistic. 80 percent of the IT budget is used to keep the lights on in the data center. That’s not just the data center budget. That’s 80 percent of the entire IT budget. Only 20 percent goes to creating any business value from technology investments.

    There is a legitimate concern that the 80 percent figure could easily rise as data centers become denser and more complex. While hardware is being designed to reduce the data center footprint, this equipment still needs to support more users, more devices and more data.

    Simply put, organizations need to get a better handle on optimizing their data centers.

    This concern has led organizations to look more closely at data center infrastructure management (DCIM), a broadly used term that may mean different things to different people. In general terms, DCIM is the concept of managing the data center environment as a whole to ensure optimization and cost efficiency.

    DCIM was introduced as part of the green IT movement and the desire to control power and cooling costs. In fact, one Gartner analyst claims organizations can recoup the cost of DCIM tools in three years on power and cooling savings alone. Today, DCIM has been expanded to include asset management, capacity management and data center monitoring. While various tools are capable of handling some of these tasks, the goal of DCIM is to optimize data center cost and performance by centralizing management functions in one cohesive system.

    DCIM enables IT to assess the existing data center infrastructure and predict how changes or additions will impact the data center’s efficiency and performance. For example, a major concern today is capacity management. DCIM tools are capable of providing a virtual 3-D view of the data center, including hardware and cabling, as well as a dashboard view of capacity-related data. DCIM also can model how the placement of additional equipment will appear and assess how it will affect data center capacity.
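
    The capacity math behind that kind of what-if modeling can be sketched in a few lines of Python; the wattage figures and rack budget below are invented purely for illustration.

    # Toy capacity check in the spirit of DCIM "what if" modeling:
    # will a planned server fit within a rack's remaining power budget?
    rack_power_budget_w = 8000          # usable power per rack, watts (assumed)
    installed = [450, 450, 600, 720]    # measured draw of existing gear, watts (assumed)
    planned_server_w = 650

    used = sum(installed)
    headroom = rack_power_budget_w - used
    print(f"In use: {used} W, headroom: {headroom} W")
    if planned_server_w <= headroom * 0.9:   # keep a 10% safety margin
        print("Placement fits within the rack's power budget.")
    else:
        print("Placement would exceed the budget -- pick another rack or rebalance.")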

    Although they can deliver significant business value, DCIM tools are extremely complicated. However, the growing popularity of DCIM solutions points to the driving need to optimize the data center spend. Even if DCIM is out of reach, organizations should be looking at ways to streamline their data center operations.

    Sigma One Source managed services were developed with the same goal as DCIM – to optimize the performance and efficiency of the data center. Our monitoring, management and support minimize downtime and unexpected expenses and relieve you of day-to-day administrative burdens. You can get a better handle on data center costs and operations, spend more time on business strategy and innovation, and worry less about maintenance. Contact us to learn more about how Sigma One Source managed services can help.

  • Applications drive business, from communication and collaboration to research and sales. This isn’t a trend to keep an eye on for the future. Instead, this has quickly become today’s business reality – so quickly, in fact, that IT managers are scrambling to keep up with this seismic shift to an application-centric business model.

    Complexity is the biggest obstacle inhibiting IT’s response to this trend. New applications, upgrades and migrations can take months to deploy, making it difficult for IT to bring new products and services to new markets while managing risk, maintaining security and compliance, and meeting efficiency demands. At the same time, IT is expected to manage more and more applications in less time with fewer resources.

    In order to stay competitive, businesses need data centers that enable greater agility without sacrificing security.

    Cisco has introduced a new and potentially game-changing model for IT – Application Centric Infrastructure (ACI). The ACI model is focused on empowering employees and dramatically improving agility and productivity through real-time application delivery. Based upon industry standards, Cisco ACI is a revolutionary data center and cloud solution that provides total visibility and a single point of management in both physical and virtual networks.

    Cisco ACI responds to increasing demands for new applications by shrinking deployment cycles from months to minutes, thanks to innovations in software, hardware and systems, as well as a network policy model that is application-aware and leverages open APIs. By reducing the time required to provision, change or remove applications, Cisco ACI accelerates the pace of business, resulting in a 75 percent lower total cost of ownership compared to software-only network virtualization.

    Cisco ACI knocks down silos, providing every administrator, regardless of their area of focus, with an identical view of an organization’s entire infrastructure. By combining all data center resources – networking, storage, compute, applications and security – into one cohesive unit, Cisco ACI makes it easier to configure, troubleshoot and change IT components while maximizing application performance.

    Cisco ACI is open technology, with an open ecosystem of partners that are collaborating to drive innovation, leverage existing IT investments and provide organizations with enhanced business agility. This diverse group of leading technology companies can use ACI’s open and extensible application policy model to ensure faster support of applications within the data center.

    In Part 2 of this post, I’ll dig deeper into the technology that powers Cisco Application Centric Infrastructure and how this breakthrough model works.

  • Most organizations have come around to the fact that Big Data can be used to drive business strategy.  However, Big Data is primarily unstructured data that doesn’t fit into traditional database schemas, making it difficult to mine for value. As a result, vendors are working on technological solutions that enable organizations to search and query this data, extract the most important information, and gain the knowledge that can create competitive advantages.

    Analytics software has been developed that enables organizations to search and query unstructured data, but this software requires significant server processing power. As a result, multiple servers are harnessed in a massively parallel application.

    However, data must be transferred to the servers for processing, placing a heavy burden on network resources and creating a bottleneck that slows processing speeds. Studies have shown that data transfers account for more than half of the processing time in some instances. By relieving this bottleneck, the processing of Big Data can be accelerated and provide organizations with real-time analytics. This requires a network that can intelligently scale to meet the bandwidth demands of the data transfer.

    Today, however, network provisioning and management is largely done manually, creating complexity and operational overhead even with relatively stable application and infrastructure requirements. Such a network environment creates major headaches when you attempt to support the changing workloads associated with server and storage virtualization. Big Data further amplifies that pain.

    Software-defined networking (SDN) is increasingly viewed as the approach best suited to support Big Data analytics. Because SDN decouples the control plane from the data plane, networks can be centrally programmed through a single controller to support Big Data demands. SDN enables IT organizations to create customizable, easily scalable and agile networks in which servers communicate efficiently, shortening wait times and speeding Big Data processing.
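
    To give a flavor of what “centrally programmed through a single controller” means in practice, here is a minimal Python sketch in which a Big Data job asks a controller’s northbound REST API for extra bandwidth before a large transfer. The endpoint and payload fields are hypothetical placeholders; real controllers each define their own northbound APIs.

    # Sketch of a "network-aware" application requesting bandwidth from an
    # SDN controller's northbound API before a big shuffle/transfer phase.
    # The URL and JSON fields are illustrative placeholders, not a real API.
    import requests

    CONTROLLER = "https://sdn-controller.example.com/api/policies"  # hypothetical endpoint

    def request_priority(src_hosts, dst_hosts, mbps, minutes):
        policy = {
            "name": "bigdata-shuffle",
            "match": {"src": src_hosts, "dst": dst_hosts},
            "action": {"min_bandwidth_mbps": mbps},
            "expires_in_minutes": minutes,
        }
        resp = requests.post(CONTROLLER, json=policy, timeout=10)
        resp.raise_for_status()
        return resp.json()

    # Reserve 2 Gbps between the mapper and reducer racks for the next hour.
    request_priority(["10.0.1.0/24"], ["10.0.2.0/24"], mbps=2000, minutes=60)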

    According to research from IBM and Rice University, such network-aware applications have been estimated to decrease the time needed to complete critical Big Data operations by 70 percent. A separate study from Infoblox revealed that an SDN-aware version of Hadoop improved performance on a key benchmark by 40 percent when run over an SDN-enabled network.

    The performance gains are so significant that Big Data may become a catalyst for SDN adoption. Organizations are beginning to lean more heavily upon Big Data to provide value, guide strategic business initiatives and produce competitive advantages. SDN has the potential to meet the performance demands of Big Data applications, better utilize network resources and significantly reduce the amount of hardware required. This would make Big Data easier to digest and convert into revenue.

  • As organizations have struggled to upgrade their IT infrastructures to support bring-your-own-device (BYOD) initiatives, cloud-based services, virtualization and big data, technology and management have become complex and inefficient. Hardware-focused IT environments lack the flexibility and agility needed to meet the demands of the modern business landscape.

    Server virtualization has helped, but it can only go so far when the rest of the data center isn’t virtualized. Instead of adding to already complex networks in which silos and manual hardware management waste time and IT resources, organizations need a fresh approach – an approach that embraces automation, knocks down silos and shares resources in order to maximize efficiency and utilization.

    One such approach is the software-defined data center (SDDC). All elements of the SDDC environment, including networking, storage, compute and security, are virtualized, abstracted from hardware, pooled, and delivered as a service. Instead of manually configuring each individual piece of hardware, administrators use intelligent software on a single console to configure policies for the entire network.
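
    As a simple, product-agnostic illustration of that policy-driven model, the Python sketch below declares the desired state of an application tier once and lets a reconciliation routine work out what has to change; all names and fields are hypothetical.

    # Declarative "desired state" for an application tier -- the SDDC idea in
    # miniature: administrators describe what they want, software determines
    # the per-device configuration. Names and fields are hypothetical.
    desired_state = {
        "app_tier": "web",
        "vm_count": 6,
        "vcpu_per_vm": 2,
        "storage_tier": "ssd",
        "network": {"vlan": 110, "firewall": ["allow tcp/443 from any"]},
    }

    def reconcile(current, desired):
        """Return the actions an automation engine would need to take."""
        actions = []
        if current["vm_count"] < desired["vm_count"]:
            actions.append(f"provision {desired['vm_count'] - current['vm_count']} VMs")
        if current["network"]["vlan"] != desired["network"]["vlan"]:
            actions.append(f"move tier to VLAN {desired['network']['vlan']}")
        return actions

    print(reconcile({"vm_count": 4, "network": {"vlan": 100}}, desired_state))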

    Essentially, the control software is decoupled from the physical device and runs as a virtualized service in the SDDC, creating a number of benefits:

    • Improved efficiency and agility. Resources are automatically provisioned and deployed and workloads are automatically balanced according to programmed policies. As market conditions change, new applications can be up and running – and providing real business value – in a matter of minutes.
    • Reduced costs. The sharing and automatic assigning of IT resources means these resources are better utilized, which can virtually eliminate wasteful IT spending while boosting productivity. Also, the SDDC uses commodity equipment that is less expensive and easier to maintain than proprietary hardware.
    • Less time spent on routine maintenance. Because the IT team doesn’t have to spend time manually configuring individual devices, they can shift their attention to strategic initiatives that drive revenue and create competitive advantages.
    • Greater flexibility. Organizations can utilize a public, private or hybrid cloud delivery model for the SDDC. And because SDDC software runs on commodity x86 servers, organizations can avoid being tied to a particular vendor’s equipment.

    Switching to an SDDC doesn’t happen overnight. More than a particular kind of technology, the SDDC is a completely new way of thinking about how the data center is built, and how IT services are managed and delivered. As a result, organizations need to determine whether they have the capacity to support migration to the SDDC. Because most IT architectures include technology from a number of vendors, a management platform that supports multiple hypervisors and multiple clouds will help simplify administration. Finally, IT needs to understand configuration management in order to shift from manual to automatic provisioning of resources.

  • Disaster can strike at any time and without warning, causing businesses to suffer downtime and data loss. The disruption to operations can be devastating. That’s one reason why 25 percent of businesses fail to reopen following a disaster, according to the Institute for Business and Home Safety.

    Despite the risk, few organizations have an effective disaster recovery (DR) platform. Traditional DR environments require organizations to duplicate their entire production infrastructure and associated operational processes in an offsite data center. Because of the significant investments and operational overhead involved, fast and reliable DR has remained out of reach for all but the largest organizations.

    Virtualization reduces the cost of setting up a DR site by minimizing the number of physical servers required for recovery and enabling data replication and failover across different types of equipment. However, it still requires organizations to purchase equipment and dedicate IT resources to maintain that equipment and manage the DR solution.

    The cloud is helping to relieve these challenges. Cloud-based DR-as-a-Service (DRaaS) solutions provide a robust DR platform in a subscription-based offering. DRaaS shifts the overhead associated with DR to a third-party service provider, eliminating the need to acquire data center space and purchase hardware or software.

    Expertise is another advantage of DRaaS. In addition to providing infrastructure, true DRaaS adds multiple layers of services, including DR planning, ongoing management and support. DR processes are handled by the service provider’s DR specialists, increasing confidence in the solution and allowing the customer’s IT resources to be redirected toward other initiatives.

    Because DRaaS capabilities vary widely, organizations should do due diligence in selecting a service provider. Key considerations include:

    • Data center capabilities — Does the service provider’s data center have redundant power and communication links and adequate fire suppression?
    • Geographic location — Is the service provider’s data center located in an area where earthquakes, hurricanes, tornados and other natural disasters are unlikely to occur?
    • Remote access capabilities — Can personnel administer the site remotely if weather, pandemic or other circumstances prevent travel?
    • Regulatory compliance — Is the data center SSAE 16 certified? Does the service provider follow applicable operational standards?
    • Testing — Does the service provider conduct periodic testing of the DR plan?
    • End-to-end support — Does the service provider monitor data replication processes and provide 24×7 support?
    • Recovery SLAs — Does the service provider employ experienced personnel who can quickly activate and manage failover and failback processes?

    It is also important to select a service provider with data centers far enough away that a regional disaster won’t affect both the production and DR site, yet close enough for effective data replication.

    After selecting a service provider, organizations should use the migration to DRaaS to bring their DR plans into closer alignment with business objectives. Instead of focusing on protecting individual systems and data, IT teams can work with a knowledgeable DRaaS provider to develop an enterprise-level DR plan that considers critical dependencies within the environment.

    More and more organizations are taking advantage of these benefits — Reportstack forecasts that the DRaaS market will grow more than 54 percent between 2014 and 2018. DRaaS offers an alternative to traditional DR that is less complex, faster to implement and more affordable, making it one of the most compelling cloud-based services available.

  • The CIO traditionally has been viewed as the person in charge of figuring out how to use technology to reduce costs. The CIO was in charge of keeping the IT environment running, ensuring that users had computers and network access, and putting out every fire imaginable. The CIO often got all of the blame when things went wrong and none of the credit when things went right.

    Today, the CIO’s role is evolving. IT is now a critical component of business processes that build revenue and create competitive advantages. Modern technology has automated many of the administrative tasks that previously required direct involvement from the CIO’s team. As a result, the CIO is being asked to expand his or her contribution from keeping the lights on to driving innovation.

    This progression of IT from survivalist to strategic asset is measured through IT operational maturity. According to Gartner, IT Operational Maturity level is determined by assessing how effectively an IT organization has aligned process, technology, people and management. This assessment is used to create and implement an improvement roadmap, which consists of a series of initiatives that enable the organization to optimize business and IT operations and maximize the ROI from the IT infrastructure.

    Many organizations struggle with IT inefficiency, inconsistent availability, lack of functionality in business applications, questionable security, complex management, and end-user dissatisfaction. IT operational maturity initiatives help to overcome these challenges and optimize the IT environment. As a result, IT is able to:

    • reduce IT costs and risk
    • improve user and customer experiences
    • boost productivity
    • enhance change management
    • build strategic relationships with vendors
    • enable greater innovation and agility as market conditions change

    An IT Operational Maturity Assessment should begin with a clear understanding of what the business expects from IT. In other words, what is the role of IT in supporting business strategies and objectives? This will help shift the focus of IT from cost containment, maintenance and stability to innovation, business agility and improving the experience of the customers of your IT services. The assessment should have a unified approach, incorporating all components of an often complex IT infrastructure and strategy, with a goal of achieving clearly defined business objectives.

    In a future post, we’ll discuss the Gartner IT Operational Maturity Level model and how an IT Operational Maturity Assessment can help organizations make better strategic decisions and become more successful.

  • As the amount of data being produced and transferred across corporate networks continues to skyrocket, organizations are struggling to meet growing storage requirements. Instead of constantly adding local storage capacity, more and more organizations are turning to cloud-based storage as a cost-effective alternative.

    There are two basic cloud storage options that can be implemented by both small-to-midsize businesses and large enterprises – Storage-as-a-Service (STaaS) and Backup-as-a-Service (BaaS). STaaS enables organizations to store data remotely by utilizing the storage infrastructure of a cloud service provider. Similarly, BaaS allows the remote backup of data on cloud-based servers owned by a service provider. In both models, data is accessed via the Internet using an encrypted connection.
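
    From the customer’s side, a BaaS-style transfer can be as simple as the Python sketch below, which copies a nightly backup archive to an S3-compatible object store over an encrypted HTTPS connection using the boto3 library. The endpoint, credentials, bucket and file names are placeholders, and in practice the provider’s backup agent usually handles this step for you.

    # Push a nightly backup archive to a provider's S3-compatible object storage
    # over HTTPS. Endpoint, credentials, bucket and file names are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://storage.provider.example.com",  # hypothetical provider endpoint
        aws_access_key_id="REPLACE_ME",
        aws_secret_access_key="REPLACE_ME",
    )

    # Server-side encryption keeps the archive encrypted at rest as well as in transit.
    s3.upload_file(
        Filename="/backups/nightly-2014-06-01.tar.gz",
        Bucket="acme-backups",
        Key="nightly/2014-06-01.tar.gz",
        ExtraArgs={"ServerSideEncryption": "AES256"},
    )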

    Cloud storage can benefit organizations of all sizes in a number of ways:

    • Capital Preservation. Instead of purchasing and allocating space for hardware, you use the enterprise-grade infrastructure of the provider who is responsible for keeping the environment up-to-date.
    • Operational Efficiency. With cloud storage, you don’t have to maintain offsite backups, manage and support the storage infrastructure, or power and cool the hardware.
    • Simplicity. Cloud storage reduces your data center footprint and the complexity associated with enterprise storage equipment.
    • Scalability. Predicting how much storage capacity you’ll need for the next year and beyond can be very difficult. Many organizations overspend or fall short. With cloud storage, you pay as you go, adding or reducing capacity based upon current needs.
    • Mobility. Bring-your-own-device policies and mobile workforces make anytime, anywhere access to data a business imperative. Cloud storage enables users to access data from any desktop or mobile device with an Internet connection. This improves productivity, flexibility, collaboration and customer service.
    • Security. Service providers typically have more robust security systems in place and highly qualified IT personnel to manage those systems. Data is encrypted, backed up and secured on multiple servers, which speeds disaster recovery and minimizes the risk of equipment failure and security breaches.

    Before moving storage to the cloud, you need to assess the readiness of your IT infrastructure. In other words, you can’t just rip out your storage equipment, flip a switch, and start using cloud storage. You need to make sure your infrastructure can support cloud applications and provide reliable Internet connections so you can take full advantage of the cloud without compromising performance or reliability. You may want to keep certain business-critical data onsite instead of turning over control to the provider.

    You also need to assess the capabilities of your service provider. Ask where your data is physically located, who can access your data, and how long it will take to access your data. If your organization is subject to industry regulations, your provider should show you how compliance is maintained. Finally, make sure answers to all of these questions and the responsibilities of all parties are clearly defined in your service level agreement, which should be reviewed by an attorney.

    Cloud storage certainly has its advantages, but only when deployed strategically with careful, meticulous planning. Let Sigma Solutions help you determine if cloud storage makes sense for your organization and what infrastructure upgrades may be necessary for implementation.

  • The process of developing, testing, deploying and changing applications in-house is typically complicated and inefficient from an IT infrastructure perspective. Each application needs hardware, an operating system, middleware, servers and an assortment of software, along with a dedicated IT team to manage that infrastructure. In addition to being expensive to power and cool, this type of environment is difficult to scale and provides little agility to quickly adapt to changing business requirements.

    Platform-as-a-Service (PaaS) is a cloud-based delivery model that enables organizations to consume application infrastructure and services as a monthly operational cost. Instead of the organization hosting the application development platform in-house, the platform is delivered by a cloud service provider, which is responsible for managing, updating and securing the infrastructure and for provisioning the servers, storage and backup needed when deploying an application. The provider may also assist with the development, testing and deployment of software.

    PaaS is similar to middleware, a software layer of tools for application developers. However, middleware must be configured and managed. PaaS makes it possible for developers to focus on creating applications without worrying about the backend infrastructure. In other words, PaaS offers middleware services while shifting the operational burden to the service provider.
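
    To ground that, here is roughly what a developer hands to a typical PaaS: application code plus a one-line run command, with the provider supplying the operating system, middleware, scaling and patching. The framework and file names below are illustrative, not tied to any specific platform.

    # app.py -- a minimal web application of the kind commonly deployed to a PaaS.
    # The platform provisions servers, routing and the runtime; the developer ships
    # this file plus a short manifest (e.g. a Procfile-style line such as "web: python app.py").
    import os
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        return "Hello from the platform!"

    if __name__ == "__main__":
        # Many platforms inject the listening port via an environment variable.
        app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 5000)))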

    PaaS allows organizations to reduce capital and operational expenses, simplify their IT infrastructure and accelerate the process of launching new applications by as much as 50 percent. This allows more resources to be devoted to the development of custom applications that create competitive advantages and drive revenue. These are the key factors driving the increased adoption of PaaS as organizations seek to operate with more efficiency, speed, flexibility and agility. In fact, Gartner predicts that all organizations will leverage public or private PaaS solutions for at least a portion of their business software by 2016.

    Different vendors offer different PaaS services and features. Some even have slightly different definitions of the PaaS model. Like any IT solution, there is no one-size-fits-all approach, so your solution should be customized to suit your specific business needs. Make sure the solution you choose is easily scalable and capable of supporting enterprise-grade applications, and make sure your provider can keep your data secure and maintain regulatory compliance. It’s also helpful to use a non-proprietary, interoperable PaaS solution in order to avoid vendor lock-in and allow for portability across clouds.

    Sigma Solutions has partnered with industry-leading cloud providers to deliver best-of-breed PaaS solutions. Let Sigma help you determine how your organization might benefit from PaaS, assess your existing infrastructure, and customize a solution that helps you operate more efficiently and effectively.

  • Most organizations that use virtual desktops are hosting them onsite in their data centers. However, as cloud-based services and mobility continue to grow, Desktop-as-a-Service (DaaS) is becoming an increasingly popular delivery model. With DaaS, a cloud service provider hosts the virtual desktop infrastructure (VDI).

    DaaS and VDI both streamline desktop management and allow for greater flexibility and mobility. They also make it possible to shift from PCs to low-cost thin clients or zero clients in order to reduce hardware costs. The most obvious difference between DaaS and VDI, however, is that DaaS is hosted in the cloud and VDI is hosted in-house. Essentially, DaaS enables organizations to outsource VDI.

    With DaaS, organizations pay a monthly subscription fee to a service provider and avoid any capital expenses that are required to implement and host VDI onsite. While long-term costs of DaaS and VDI are likely comparable, VDI requires a robust backend infrastructure that can be complex to implement and operate. This makes DaaS more economically feasible for many organizations. On the other hand, DaaS customers must have ample bandwidth and reliable Internet connectivity to ensure optimal performance and minimize latency, two common sources of frustration when using cloud-based services.

    DaaS shifts responsibility for maintenance and costs related to storage, backup, security and upgrades to the service provider. This reduces network complexity and removes many of the day-to-day management tasks from your IT department, although IT must manage its virtual desktop applications and monitor remote desktop protocols. With VDI, all management, maintenance and provisioning are handled in-house. While this requires more IT resources, it also gives IT more control over data security and performance.

    DaaS is flexible, as cloud-hosted desktops can be quickly deployed on virtually any device, and you can scale services up or down according to current business needs. Licensing is an issue with DaaS, however; Microsoft has yet to offer a Windows 7 licensing agreement for service providers, although there are alternatives to Windows 7. VDI licensing isn’t much better, with Software Assurance and a variety of other licenses required.

    Organizations will obviously benefit from the lower upfront costs, simplified infrastructure and streamlined management with DaaS, but IT generally prefers to maintain direct control over security and sensitive data. As a result, many enterprises are choosing a hybrid approach to desktop virtualization, leveraging both onsite VDI and cloud-based DaaS.  It’s simply a matter of determining which approach makes the most sense for specific groups of users within the organization.

    Before moving to a DaaS model, make sure your service provider offers adequate security, connectivity, reliability and support, and provides compensation for outages in your service level agreement. Keep in mind that you can conduct pilot programs for DaaS, so take advantage of this capability in order to test the effectiveness of your DaaS solution and determine if it is the right approach.

    According to a TechRepublic survey, 45 percent of organizations are using virtual desktop infrastructure (VDI) for end-user computing. Research from Gartner predicts virtual desktops will expand to 70 million units in 2015 – up from its 2010 estimate of 40 million units by 2013 – and account for 40 percent of the market. VDI deployments are growing, but not as quickly as many had expected.

    While industry experts have been saying “this is the year for VDI” for the past few years, there are clear challenges with VDI implementation that have stopped that prediction from becoming reality. Compared to traditional desktop deployments, implementation of VDI requires significant expertise and can be very complex. Poorly planned and executed VDI deployments have led to poor user experiences and prevented organizations from taking advantage of the flexibility and simplified management that VDI is capable of delivering.

    In many instances, organizations must increase network and storage capacity in order to support VDI. Software licensing policies are still evolving and can be costly and complicated. From a strategic standpoint, organizations need to determine which employees will actually benefit from VDI and choose a scalable architecture that allows for simple VDI expansion. Simply put, implementation complexity has curbed some of the enthusiasm surrounding VDI.
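
    A quick back-of-the-envelope calculation shows why capacity is such a common sticking point; the per-desktop figures in this Python sketch are rough rules of thumb rather than measurements from any particular environment.

    # Rough VDI storage sizing: aggregate IOPS for steady state vs. a boot storm.
    # Per-desktop figures are illustrative assumptions; measure your own workloads.
    desktops = 500
    steady_iops_per_desktop = 15      # typical office worker, assumed
    boot_iops_per_desktop = 80        # login/boot storm burst, assumed

    print(f"Steady state: {desktops * steady_iops_per_desktop:,} IOPS")
    print(f"Boot storm:   {desktops * boot_iops_per_desktop:,} IOPS")
    # 500 desktops: 7,500 IOPS steady and 40,000 IOPS during a boot storm --
    # far more than a small disk array can absorb without flash or caching.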

    Converged infrastructure solutions promise to change that dynamic by simplifying VDI implementation. With a converged infrastructure, the entire IT environment – compute, networking, storage and virtualization resources – is delivered in one preconfigured, pretested solution.

    Converged infrastructure speeds VDI deployment by dramatically reducing complexity and streamlining the design of the data center architecture. Business applications are tested and benchmarked in advance to ensure high performance levels in mixed workloads, which minimizes risk and leads to more predictable, reliable results. Organizations can start small and scale the environment up or out by adding converged infrastructure components while maintaining the consistent performance needed for an optimized user experience.

    Today, vendors are offering converged infrastructure solutions that are designed and validated for desktop virtualization to help organizations hit the ground running and take full advantage of VDI. Many vendors are offering solutions that enable organizations to quickly respond to changing market conditions with pre-integrated platform options and template-based infrastructure and workload provisioning.

    All of these factors, along with better utilization of resources, fewer network components and fewer maintenance contracts, contribute to lower total cost of ownership when VDI is delivered via converged infrastructure. Integrated, centralized management further drives down costs and operational headaches.

    Will this be the year for VDI? We won’t start making those kinds of proclamations. But converged infrastructure may just be the game-changer that alleviates many of the concerns organizations have with VDI, leading to more widespread deployments.

  • First, the public cloud was all the rage as organizations enjoyed new levels of flexibility. As security and regulatory compliance concerns grew, the focus moved to the private cloud. Now, organizations seeking increased agility and efficiency are exploring the hybrid cloud.

    In a hybrid cloud environment, an organization seeks to maximize agility by using both public cloud services and an onsite private cloud. Instead of replacing the existing IT infrastructure, the cloud complements and enhances the corporate data center. This enables organizations to leverage the scalability and cost-efficiency of a public cloud, maintain control of mission-critical applications and data, and automatically provision resources according to current business needs.

    The number of hybrid cloud deployments remains relatively low, but they’re at the same level as private cloud deployments a few years ago, according to Gartner research. In fact, nearly half of large enterprises are expected to move to a hybrid cloud by the end of 2017.

    While a shift is underway to the hybrid cloud, the technology is still evolving and challenges remain. A hybrid cloud tends to be more complex than traditional environments, making it difficult to develop policies and ensure seamless operation between cloud services and in-house architecture. Compatibility issues can lead to frustrating, productivity-draining performance issues.

    According to a study conducted by Forrester Research last year, ensuring the performance of applications and maintaining visibility and control of workloads across public and private cloud services were significant challenges associated with the hybrid cloud. IT must effectively manage configuration, security, and the detection and resolution of network issues while minimizing impact to the production environment.

    There are cultural forces at work as well. Gartner suggests the largest obstacle to more widespread hybrid cloud deployments is resistance to the transformational adjustments necessary to make it work. IT must break away from the traditional IT culture, embrace a model centered on automation and self-service, and focus on solving strategic business process problems rather than technical issues.

    One way to overcome the challenges involved with hybrid cloud deployments is to partner with a managed services provider. A managed services provider can develop a strategy based upon your organization’s business processes and goals to ensure a cohesive hybrid cloud environment. And because cloud-based services, application and data are critical to your operations, around-the-clock network monitoring, support and mobile device management are necessary to maintain the highest levels of security and performance. Turning over these responsibilities to a managed services provider reduces costs and enables in-house IT resources to focus more on strategic initiatives and take full advantage of the agility made possible by a hybrid cloud.

    Technology is now viewed more as a driver of revenue and creator of competitive advantage than a collection of tools. As a result, organizations must reevaluate their approach to how technology is managed and integrated with business processes. By leveraging managed services in a hybrid cloud environment, IT can streamline operations while delivering the flexibility, scalability, reliability and performance the business demands.

    Keeping documents up-to-date and ensuring that colleagues have the right version has always been difficult. The problem has only grown worse with the distributed nature of today’s enterprise and the increasing use of mobile devices. Cloud-based file-sharing services such as Dropbox, Evernote and YouSendIt have emerged to provide a simple (and, sometimes, free) solution.

    With cloud-based file-sharing, users can access documents anytime, anywhere from any Internet-connected device. It enables employees to easily share documents with individuals outside the company firewall, and is particularly useful for files that are frequently updated or too large to email.

    But organizations are justifiably concerned about the security threats associated with cloud-based file-sharing, including data loss, theft or regulatory compliance violations. According to the “Content in the Cloud” report by the Association for Information and Image Management, 45 percent of companies have official policies regulating the use of “consumer-grade” file-sharing and collaboration systems. Although few organizations ban them outright, IBM made news a couple of years ago when it prohibited its 400,000 employees from using these systems as well as other public cloud services.

    Not every organization has the same needs and requirements as IBM. But any business that stores sensitive information should be aware of the very real risks associated with cloud-based file-sharing. Security is not the only issue — organizations should also be concerned about losing control over valuable information assets. Consider these results from a recent survey conducted by Harris Interactive:

    • 51 percent of employees think that cloud-based file-sharing is secure.
    • 38 percent have transferred sensitive files via an unapproved file-sharing service to someone else at least once; 10 percent have done it six or more times.
    • 46 percent say that it would be easy to take sensitive business documents to another employer.
    • 27 percent of users of cloud-based file-share services report still having access to documents from a previous employer.

    Simply banning the use of cloud-based file-sharing isn’t the answer. Employees need to easily access and share files and will adopt tools that allow them to do that — with or without the approval of IT.

    The best way for organizations to curb the use of consumer-grade file-sharing is to provide employees with an alternative. An enterprise-class file-sharing solution can provide the same convenience and flexibility as consumer-grade options while ensuring that IT retains the necessary control. Here are three of the many options available:

    • Citrix ShareFile offers best-in-class capabilities to users such as secure file sharing on any device, robust sync tools to manage data on multiple devices, and seamless Microsoft Outlook integration, while extending enterprise-grade security and control capabilities to organizations.
    • VMware Horizon Workspace provides a single workspace for desktops, applications and data as well as secure internal and external file-sharing.
    • Syncplicity by EMC enables one-click file-sharing and distribution of files to mobile users, and provides real-time document backup and continuous availability.

    When choosing an enterprise-class file-sharing solution, there are a number of things to consider:

    • Employee work styles and company culture. The file-sharing solution should enhance collaboration, streamline processes, and extend to customers, business partners and other third parties as appropriate.
    • Existing content and collaboration systems. The file-sharing solution may need to integrate with these tools to ensure smooth workflows as well as security, privacy and regulatory compliance.
    • Deployment, administration and management. Because file-sharing solutions are deployed to most if not all employees, administration, management and support need to be as efficient as possible.

    Most importantly, the solution needs to be as simple and intuitive as consumer-oriented products to ensure broad adoption among users. Sigma can help you evaluate and deploy an enterprise-class file-sharing solution that balances ease-of-use with security and regulatory compliance requirements.

  • While cloud computing has delivered tremendous business value to organizations of all sizes, recent data specific to midmarket companies illustrates the benefits of steady adoption based upon strategic planning. According to a recent Deloitte study, the cloud has made enterprise-class technology more accessible, economically feasible and less risky for midmarket organizations.

    56 percent of midmarket IT executives are using cloud-based services, and 53 percent say the cloud makes their companies significantly more competitive. These companies have leveraged the cloud to increase productivity, reach new customers and strengthen the company culture. Because business applications, data and services are available from anywhere on virtually any device, employees can better understand and more quickly respond to the needs of clients and prospects.

    Another survey from Evolve IP revealed that nearly nine out of 10 midmarket IT professionals believe cloud computing is the “future model for IT.” These companies are using an average of 2.5 cloud-based services, and 75 percent of respondents plan to move more services to the cloud within the next three years.

    Seventy percent say the cloud has led to greater flexibility and scalability, as the cloud supports an increasingly remote workforce that is no longer tied to an office or computer. It also enables IT to easily add new applications, services and users without purchasing new hardware, creating a more scalable infrastructure. Additionally, 60 percent of cloud users report improvements in disaster avoidance and business continuity, thanks to offsite data backup that enables users to access applications and data with minimal or no disruption to business operations.

    Although the cloud is delivering on its promise to improve productivity, customer service, flexibility, scalability and disaster preparedness, some midmarket executives are struggling to balance these benefits with the risks of cloud computing. These risks include security, entrusting sensitive data to a third party, application performance and reliability, and regulatory compliance. In fact, nearly 40 percent of midmarket IT executives haven't deployed cloud-based services due to concerns about data privacy and security, according to the Deloitte survey.

    Before implementing a cloud computing solution, consider the following factors to maximize the benefits and minimize the risks:

    • Identify how the cloud will support and improve upon your organization's processes and goals.
    • Determine which specific departments and job functions would benefit the most from cloud computing.
    • Assess your existing IT infrastructure to determine how complicated a cloud deployment might be.
    • Determine which applications can best take advantage of cloud services — for example, those with intermittent, bursty demand such as e-commerce sites or marketing campaigns (a simple workload-analysis sketch follows this list).
    • Evaluate whether the cloud can provide the resources and scale to support modern data platforms such as Hadoop or MongoDB.
    • Think about how quickly your organization is changing and expanding, and how the cloud can facilitate this evolution.
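
    To make the workload question concrete, here is a minimal Python sketch of the kind of peak-versus-average analysis that can flag bursting candidates. The utilization samples and the 3x threshold are invented for illustration only; a real assessment would draw on actual monitoring data and cost models.

    ```python
    # Illustrative only: the sample data and the 3x peak-to-average threshold
    # are assumptions, not measurements or a recommended policy.

    from statistics import mean

    # Hypothetical hourly CPU-utilization samples (percent) per application.
    utilization = {
        "ecommerce-site":     [6, 7, 8, 9, 10, 12, 95, 90, 11, 8],
        "marketing-campaign": [5, 4, 6, 70, 95, 90, 8, 5, 4, 6],
        "payroll-batch":      [40, 42, 41, 43, 40, 44, 42, 41, 43, 40],
    }

    BURST_RATIO = 3.0  # peak demand more than 3x the average suggests bursting

    def is_burst_candidate(samples):
        """True when peak utilization far exceeds the average, meaning
        capacity sized for the peak would sit idle most of the time."""
        avg = mean(samples)
        return avg > 0 and max(samples) / avg >= BURST_RATIO

    for app, samples in utilization.items():
        verdict = "cloud-bursting candidate" if is_burst_candidate(samples) else "steady workload"
        print(f"{app:22s} peak={max(samples):3d}%  avg={mean(samples):5.1f}%  -> {verdict}")
    ```

    Whatever the metric, the reasoning is the same: workloads whose peaks dwarf their averages are the ones most likely to waste fixed, on-premises capacity and to benefit from pay-as-you-go cloud scale.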

    Most organizations don’t have the in-house expertise or resources to adequately address each of these considerations. This can lead to cost overruns, delays and less-than-optimal performance. Let the Sigma Professional Services team conduct a cloud-readiness assessment, develop a strategic deployment plan, and leverage partnerships with respected cloud providers such as Rackspace and SunGard to help you take full advantage of cloud computing.

  • Despite the efficiencies gained through technological advances and hardware consolidation in recent years, research from IDC shows that the old 80-20 rule still applies to most IT departments: 76.8 percent of time and resources are devoted to maintaining the environment, while the remaining 23.2 percent are spent on strategic initiatives that deliver actual business value.

    How is this possible? Various components of the IT infrastructure are still managed in technological and organizational silos. Silos drive up costs because provisioning, deploying and updating new solutions require more time and personnel. Added layers of complexity and compatibility issues make the environment less flexible and more difficult to operate and scale. Virtual server sprawl creates performance issues and administrative headaches, which often lead to unnecessary upgrades and overprovisioning.

    Converged infrastructure simplifies the IT environment by delivering compute, networking, storage access and virtualization resources in one preconfigured, pretested solution. The reduced complexity of a converged infrastructure that shares the same pool of resources brings a number of benefits:

    • Simplified, central management and maintenance. Administrators control a converged infrastructure through a single management console. Training requirements are reduced, and IT has a single point of contact for support, even if the solution includes components from more than one vendor.
    • Lower costs. More efficient cabling, lower power and cooling requirements, fewer maintenance contracts, higher resource utilization, and a smaller footprint with fewer moving parts make a converged infrastructure less expensive to operate.
    • Faster deployments. A converged infrastructure is typically up and running in days as opposed to months. Manual configurations and the errors that commonly cause delays are replaced with a fully automated, orchestrated solution (see the provisioning sketch after this list).
    • Less risk. Because a converged infrastructure is preconfigured and pretested for various workloads, there is a much lower risk compared to building an IT infrastructure from the ground up. Performance is much more predictable.
    • Easier scalability. Provisioning equipment, applications and services is simpler and faster, and changes can be made seamlessly without disrupting the rest of the IT environment.
    • Improved business agility. Because time-draining silos are eliminated and orchestration software makes it easy to add new solutions, IT can more quickly adapt to evolving business priorities and market conditions.
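
    To illustrate the orchestration point, the short Python sketch below submits a single declarative "blueprint" describing compute, network, storage and virtualization together. The blueprint fields and the submit_blueprint() function are hypothetical stand-ins, not any vendor's actual API; the takeaway is simply that one orchestrated request replaces separate, siloed configuration steps.

    ```python
    # Hypothetical sketch: the blueprint schema and submit_blueprint() are
    # illustrative stand-ins, not any vendor's actual management API.

    import json

    def submit_blueprint(blueprint: dict) -> str:
        """Hand one declarative request to the platform's orchestration layer,
        which would configure compute, network and storage together instead
        of via separate tickets to separate teams."""
        print("Submitting blueprint:\n" + json.dumps(blueprint, indent=2))
        return "request-0001"  # an ID the management console could track

    blueprint = {
        "name": "sql-cluster-prod",
        "compute": {"nodes": 4, "vcpus_per_node": 16, "memory_gb": 128},
        "network": {"vlan": 210, "qos_profile": "database"},
        "storage": {"capacity_tb": 20, "tier": "flash", "replication": "synchronous"},
        "virtualization": {"hypervisor": "vSphere", "cluster": "prod-01"},
    }

    request_id = submit_blueprint(blueprint)
    print(f"Tracking ID: {request_id}")
    ```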

    It’s not uncommon for organizations to take a best-of-breed approach to IT, choosing the best hardware, software and services from various vendors with the goal of creating an IT all-star team. While this approach will arm organizations with world-class players, managing those players and getting them to play nicely together is a major challenge. With a converged infrastructure, someone else has already assembled a cohesive team for you – a team that’s ready to hit the field from day one.

    To determine the best path forward, you need to understand what types of changes converged infrastructure will bring to your organization – technically, operationally and culturally. How will it affect your existing IT environments? How will roles change? Will it be difficult to get your team to embrace these changes? Is your current infrastructure aligned with your business processes and goals, or is it time for a change? Sigma Solutions can help you answer these questions so you can take advantage of a more efficient, easy-to-manage IT environment.

  • According to a Frost & Sullivan survey of midmarket companies, keeping up with new technology is the biggest IT challenge organizations are facing today. This is an obstacle that goes beyond staying abreast of the latest innovations. Organizations of all sizes are struggling to choose and implement the kinds of IT solutions that create true competitive advantage.

    That’s because many IT organizations are stuck in the old 80-20 rut. As much as 80 percent of IT resources continue to be dedicated to managing and maintaining existing and often outdated technology, while only 20 percent are spent on strategic initiatives that boost productivity and revenue. Instead of spurring growth, technology is causing many organizations to remain stagnant.

    More and more organizations are adopting a managed services model, in which an IT service provider remotely manages the organization’s IT processes. Managed services include network monitoring, data backup management, server maintenance, security and patch management, tech support and other services.

    A managed services provider (MSP) can help organizations optimize their IT environments. Benefits of this model include:

    • More efficient operations. An MSP can help you streamline your IT processes, dramatically reduce maintenance and support costs, and remove layers of complexity in order to deliver critical services more quickly and effectively.
    • Better use of in-house IT resources. Most organizations have limited IT budgets and personnel. Utilizing managed services allows these organizations to dedicate IT resources to strategic growth initiatives and outsource time-consuming, day-to-day tasks.
    • Greater predictability. MSPs use automated tools that minimize human error. At the same time, best-in-class MSPs employ best practices that ensure critical maintenance is performed regularly.
    • Improved network resilience. Managed services can help detect and remediate problems before they cause downtime. MSPs use sophisticated security tools to monitor for and thwart cyberattacks, while remote backup and disaster recovery solutions help ensure access to mission-critical applications and services should disaster strike.
    • Fewer compliance headaches. Industry regulations are constantly evolving and many requirements are becoming more stringent. A recent study by Six Degrees Group found that more than half of IT professionals would prefer to outsource data compliance to an MSP. An MSP can help ensure regulatory compliance by monitoring for events that could result in downtime or data loss.
    • Faster implementation of new tools and services. An MSP typically has a level of expertise that few in-house IT departments can match, as well as the resources to evaluate the latest IT solutions. The MSP can serve as a “virtual CIO,” helping you to deploy new solutions quickly and ensure they’re aligned with your organization’s business processes and goals.
    • Data for improved budgeting and decision-making. Based upon data gathered by remote monitoring and reporting tools, organizations can fine-tune their budgets and be prepared to ramp up or scale back services as business needs evolve.

    To take full advantage of the benefits of managed services, IT should determine which tasks can be automated using remote tools. These tasks are ideal candidates for outsourcing to an MSP. Look for a provider who has extensive experience and offers a blend of managed and professional services that can be strategically combined to meet your specific business needs.
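
    As a simple illustration of the kind of routine check that lends itself to remote automation, the Python sketch below verifies disk headroom and confirms that a few critical services answer on their ports. The hostnames, ports and the 90 percent threshold are placeholders; production MSP tooling layers scheduling, alerting and escalation on top of checks like these.

    ```python
    # Illustrative only: hosts, ports and thresholds are placeholders;
    # real monitoring tools add scheduling, alerting and escalation.

    import shutil
    import socket

    DISK_ALERT_PCT = 90          # assumed threshold: alert when a volume is over 90% full
    SERVICES = [                 # placeholder (host, port) pairs for critical services
        ("mail.example.com", 25),
        ("db.example.com", 5432),
        ("web.example.com", 443),
    ]

    def disk_usage_pct(path="/"):
        """Percentage of the volume at `path` that is in use."""
        usage = shutil.disk_usage(path)
        return usage.used / usage.total * 100

    def service_reachable(host, port, timeout=3):
        """True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        pct = disk_usage_pct("/")
        status = "ALERT" if pct >= DISK_ALERT_PCT else "ok"
        print(f"disk /: {pct:.1f}% used [{status}]")

        for host, port in SERVICES:
            state = "reachable" if service_reachable(host, port) else "UNREACHABLE"
            print(f"{host}:{port} {state}")
    ```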

    Sigma OneSource is an enterprise-class managed services solution based upon industry best practices and our proven methodology. Let Sigma help you assess the state of your IT operations and determine how a managed services model can deliver the most value to your organization.