The typical data center supports multiple networks — one for data and applications, one for storage, and perhaps another for server clustering. As such, servers must feature multiple network adapters that fulfill the I/O requirements of each function. What’s more, servers commonly have dedicated interfaces for management, backup or virtual machine live migration.
Supporting all of these interfaces adds significantly to data center complexity and imposes substantial costs for cabling, rack space and upstream switches. In addition, the rat’s nest of cables and connections required for each function makes the data center harder to cool and drives up power costs.
That reality has led to efforts to consolidate storage and data networks onto Ethernet and IP protocols. Traditionally, storage-area networks (SANs) have used Fibre Channel technology, which was designed to provide reliability and performance for storage networking. But because Ethernet is used throughout the data center, Ethernet-based iSCSI emerged as a popular choice for those who didn’t want a separate network or protocol just for storage.
Other options include Fibre Channel over Ethernet (FCoE) and its cousin Fibre Channel over IP (FCIP), each of which encapsulates Fibre Channel frames for transmission over data networks. Proponents of these protocols tout a number of benefits, including lower operating costs, ease of deployment, scalability, low latency and high performance. But while each has found niche implementations, neither has gained widespread traction.
Although many organizations see value in consolidating storage and data networks, Fibre Channel still reigns supreme as the storage networking technology of choice for mission-critical applications. The main reason is that Fibre Channel is lossless: its credit-based flow control prevents frames from being dropped under congestion, so data arrives intact the first time and performance and latency stay consistent. Other advantages of Fibre Channel include seamless scalability, compatibility with equipment from multiple vendors, and Quality of Service that prioritizes the most important applications.
That’s not to say that all Fibre Channel networks are up to the demands of today’s storage infrastructure. In many environments, older Fibre Channel switches lack the performance needed to support virtualization and increasing use of solid-state disks (SSDs). That’s why many organizations are planning to upgrade their Fibre Channel networks.
Each generation of Fibre Channel technology has doubled the speed of its predecessor. Gen 5 Fibre Channel helps organizations meet growing performance demands by delivering 16Gbps speeds. And earlier this year the Fibre Channel Industry Association ratified Gen 6, which will double SAN speeds to 32Gbps and provide enhanced security and improved energy efficiency. Gen 6 products will be available in 2016.
If it’s time for a storage network upgrade, Sigma can help. Let us show you how the latest Fibre Channel technology can ensure the highest levels of performance and reliability for your mission-critical applications.
Gartner estimates that data volume will grow 800 percent during the next five years, and that 80 percent of it will be stored as unstructured data, which includes files such as emails and images that don’t reside in a traditional database format. The explosive growth of unstructured data, which doubles in volume every three months, is creating a major headache for IT managers.
Distributed IT environments with multiple remote sites place heavy demands on IT resources, especially storage. If you follow the traditional approach and constantly add storage capacity, you’ll blow up your budget. At the same time, managing, securing and storing a never-ending flow of unstructured data across a number of branches often leads to fragmented resources and poor utilization. From a user perspective, even remembering a file’s name and where it was stored can be difficult.
Another major challenge is regulatory compliance, with requirements that are constantly changing and tend to become stricter with every high-profile security breach. Thousands of new pieces of legislation are pending, especially in heavily regulated industries. Organizations are scrambling to get a handle on new mandates in order to avoid a spike in data management costs, heavy fines and a tarnished reputation. While structured data is relatively well-defined and easier to navigate with traditional applications, unstructured data introduces a massive layer of complexity that is difficult to manage.
Organizations can be better equipped to manage the rapid growth of unstructured data by implementing a data governance program. Data governance refers to an organization’s data management strategy and processes. Components of data governance include:
- Identifying owners of data assets and parties responsible for ensuring that data is accurate, accessible, consistent, complete and updated.
- Establishing processes for data storage, archival, backup and security.
- Developing procedures that govern how data should be used by authorized parties.
- Creating procedures to ensure compliance with industry regulations.
A data loss prevention (DLP) strategy is a critical component of data governance. DLP involves the policies and software that are used to prevent sensitive company data from leaving the network and to detect a potential breach. Managers create business rules that are used to tag sensitive data, such as intellectual property, and deny the intentional or accidental disclosure of that information. There has been a growing demand for data governance and DLP due to insider threats and strict privacy laws that have rigid requirements for data protection and access.
DLP should be integrated with both standard and advanced security measures as part of a robust security infrastructure, which is managed and audited according to a defined security policy. Standard security measures include firewalls, intrusion prevention systems, antivirus software and threat management systems, which locate, prioritize and track security patches and fixes. Advanced security measures provide greater protection by monitoring network traffic, conducting additional user verification procedures and recognizing abnormal system behavior. There are also solutions designed specifically for DLP that will detect and block efforts to expose certain data, which can be valuable if an authorized party attempts to access data that falls outside of their user profile.
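To make the rule-based tagging concrete, here is a minimal sketch of the kind of pattern matching DLP software applies to outbound content. The patterns, rule names and blocking logic are illustrative assumptions only, not a production control; commercial DLP products use far richer detection such as data fingerprinting and exact data matching.

```python
import re

# Illustrative patterns for sensitive data; real DLP products use far richer detection.
RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def classify(message):
    """Return the list of rule names the outbound message matches."""
    return [name for name, pattern in RULES.items() if pattern.search(message)]

def allow_outbound(message):
    """Block (return False) any message that matches a sensitive-data rule."""
    tags = classify(message)
    if tags:
        print(f"Blocked outbound message; matched rules: {tags}")
        return False
    return True

if __name__ == "__main__":
    allow_outbound("Quarterly numbers attached.")             # allowed
    allow_outbound("Employee SSN is 123-45-6789, see file.")  # blocked and tagged
```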
Let Sigma Solutions show you how a carefully planned data governance program and DLP can guide the management of unstructured data, minimize risk and keep your sensitive data safe.
The Business Case for Deep Data Archival - 2014.11.06
Deep data archival, the process of storing data in a separate system for long-term preservation, is typically viewed as one of the more mundane IT tasks. However, with the explosion of data volume and the resulting need for additional storage capacity, efficient data archival has never been more important. While storage demands are quickly increasing, budgets are not.
A deep archive stores data that never changes and is rarely if ever accessed. This data is traditionally stored for historical, regulatory or legal purposes. Although files in a deep archive may never be accessed, they must be accessible and able to interface with current IT infrastructure.
The benefits of developing a deep data archival strategy include:
- Simplified storage management. Archival applications allow administrators to migrate data away from primary storage based upon metadata information. As a result, this data won’t clog searches, reporting and backups or complicate management of the primary storage environment.
- Improved storage efficiency. Deep data archival enables you to reserve primary storage capacity and high-performance storage hardware for mission-critical data.
- Cost control. Deep data archival reduces the need to add storage capacity to house data that may never be accessed.
- Data governance. Deep data archival allows organizations to meet data governance standards for managing data and satisfying regulatory compliance and eDiscovery requirements.
Creating a deep data archival policy begins with identifying data that should be archived and for how long. This is typically based upon how long it has been since the data was last accessed or modified. You also need to determine if certain types of data should simply be deleted after a certain period of time. For example, should the data be stored in your deep archive forever, or should it be purged after a few years? What are the legal and regulatory compliance requirements?
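As an illustration of how such an age-based policy might be applied, the following sketch scans a primary file share and flags archive and purge candidates by last-modified date. The share path and retention thresholds are hypothetical placeholders for whatever your policy defines.

```python
import os
import time

# Hypothetical values: point this at your own share and adjust thresholds to your policy.
PRIMARY_SHARE = "/mnt/primary"      # primary storage to scan
ARCHIVE_AFTER_DAYS = 3 * 365        # unmodified for 3+ years: archive candidate
PURGE_AFTER_DAYS = 7 * 365          # unmodified for 7+ years: review for deletion

now = time.time()
archive_candidates, purge_candidates = [], []

for root, _dirs, files in os.walk(PRIMARY_SHARE):
    for name in files:
        path = os.path.join(root, name)
        try:
            age_days = (now - os.stat(path).st_mtime) / 86400
        except OSError:
            continue  # skip files that disappear or can't be read during the scan
        if age_days >= PURGE_AFTER_DAYS:
            purge_candidates.append(path)
        elif age_days >= ARCHIVE_AFTER_DAYS:
            archive_candidates.append(path)

print(f"{len(archive_candidates)} files to archive, {len(purge_candidates)} files to review for purge")
```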
Once your policy has been established, you need to determine what software should be used to carry out the archival process, what type of media will be used for archival and what applications will be used to locate the data when needed. When planning your deep data archival environment, you need to consider:
- the amount of data that will be archived, and how much you expect that amount to grow each year
- how often the data will be accessed for reports, compliance, e-Discovery and other purposes
- the type of media (disk, tape, etc.) that is best-suited to satisfying your archival needs
When designing a deep data archival solution, you need to choose a scalable platform capable of maintaining the integrity and security of your stored data. The platform should have de-duplication, compression and encryption capabilities. Also, the search and retrieval functions of your deep data archival system are critical. If it takes hours to find and access data, the performance won’t justify your investment, and you’ll run the risk of missing deadlines for regulatory compliance and eDiscovery.
There are three general approaches to deep data archival:
- On-premises solutions can be a wise choice for organizations that have proprietary data and technology that they would prefer not to move offsite.
- Cloud platforms enable organizations to essentially outsource deep data archival and achieve a level of scalability that is difficult to achieve with an on-premises platform.
- Hybrid approaches enable organizations to get the best of both worlds – the performance of locally stored data and the redundancy of offsite data for disaster recovery purposes.
Let Sigma help you design a deep data archival solution that combines the right mix of on-premises and cloud technology to support your business and IT requirements.
What You Should Expect from Cloud SLAs - 2014.11.03
The cost of data center downtime is on the rise. A recent Ponemon Institute study of data centers based in the United States found that unplanned downtime costs approximately $7,900 per minute, a 41 percent increase from the 2010 survey.
But what if your organization relies heavily on the cloud? It’s your cloud service provider’s job to worry about your IT infrastructure, and the service level agreement (SLA) promises “five nines” of uptime. That means you only have to worry about a small fraction of a percent of downtime, right?
While cloud SLAs typically include some type of “uptime guarantee,” some downtime is virtually inevitable. Few providers actually commit to five nines; a far more common 99.95 percent guarantee still allows for more than four hours of downtime each year, and all of the major cloud providers have had incidents of unplanned downtime. The cloud SLA simply spells out how the cloud service provider will compensate you for downtime, which typically involves some kind of credit on your monthly bill.
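For perspective, the arithmetic behind these uptime figures is easy to check. The short sketch below converts an SLA uptime percentage into the downtime it still permits each year.

```python
# Convert an SLA uptime percentage into the downtime it still permits per year.
MINUTES_PER_YEAR = 365 * 24 * 60

for uptime_pct in (99.9, 99.95, 99.99, 99.999):
    downtime_min = MINUTES_PER_YEAR * (1 - uptime_pct / 100)
    print(f"{uptime_pct}% uptime allows ~{downtime_min:.0f} minutes "
          f"({downtime_min / 60:.1f} hours) of downtime per year")
```

Even a “four nines” guarantee leaves nearly an hour of permissible downtime per year.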
However, keeping your data center onsite doesn’t make you immune to outages. On the contrary, onsite IT resources are typically more likely to experience downtime than cloud services. The Ponemon study found that 91 percent of data centers had experienced an unplanned outage in the previous 24 months, with an average incident length of 86 minutes.
Cloud computing providers minimize downtime through enterprise-class technology, sophisticated management tools and a large IT staff. A fault-tolerant environment with redundancy and failover capabilities ensures high availability, while around-the-clock monitoring of your cloud services reduces the impact of an outage.
Still, it is important to set the right expectation and understand that you are trading one set of risks for another. Rather than promising zero downtime, cloud SLAs help to ensure clarity and transparency.
The Cloud Standards Customer Council offers the “Practical Guide to Cloud Service Level Agreements,” which is designed to help organizations develop cloud SLAs that satisfy their business needs. Anyone considering cloud services should read this document.
A cloud SLA should be developed cooperatively and should document expectations, define responsibilities, eliminate confusion and protect your interests. It should include very specific parameters and protocols for availability, performance levels, security, storage and backup, troubleshooting, updates to cloud services and dispute management, as well as how cloud services can be seamlessly shifted to a new provider. It can also include guidelines for maintaining regulatory compliance, particularly in the medical, financial and retail industries.
Sigma has partnered with leading cloud providers to deliver some of the most robust cloud services available on the market today, backed by Sigma’s local engineering resources. Let us help you to develop a customized cloud strategy with SLAs that meet your specific business requirements. Sigma also offers a Cloud Readiness Assessment: a professional services engagement that provides a tool-based evaluation of your current environment including IT operational and infrastructure processes and procedures.
Bring-Your-Own-Application: The Benefits and Risks - 2014.10.27
The bring-your-own-device (BYOD) phenomenon shows no signs of slowing down, even as IT managers struggle to manage and secure employee-owned smartphones and tablets in the workplace. Most organizations have realized that banning the use of personal devices doesn’t work. Employees are using them anyway. In fact, Gartner predicts that nearly four in 10 organizations will stop providing employees with devices by 2016, relying exclusively on BYOD.
This trend has extended to applications. BYOD has led to BYOA (bring-your-own-applications) as more employees use third-party cloud applications to do their jobs. The presence of consumer-grade applications such as Dropbox and Google Docs is growing because employees prefer to use devices and applications that they know and like. In many cases, employees feel tools provided by employers are inadequate or outdated.
On the plus side, BYOA often increases employee productivity and engagement. It gives the employee the flexibility to choose the application that is best-suited to help them do their job. Because employees use familiar applications, minimal training and support are required and employees can hit the ground running. BYOA also reduces capital expenses for software and licensing.
However, BYOA brings a number of risks to the organization, especially when employees use rogue applications without informing IT. Security and compliance are the two biggest concerns because corporate data stored on a third-party application is difficult if not impossible for IT to control or protect. The vast majority of these applications were designed for consumer use, not enterprise use, and are typically incapable of ensuring data privacy and security.
If a device is stolen, device data can be wiped, but data stored in a third-party cloud application cannot. If data hasn’t been backed up, it may never be recovered. According to a study from Intralinks Holdings and Gigaom Research, 46 percent of senior IT professionals believe unmanaged file sharing platforms are causing company data and intellectual property to be compromised.
Unmonitored employee downloading or misuse of applications can also cause network traffic congestion, which can degrade the performance of business-critical applications and processes. This can also make organizations vulnerable to hackers who embed malware in applications and present them as legitimate software.
There are steps organizations can take to maximize the benefits and minimize the risks of BYOA. Take inventory of applications currently being used, especially those that access and share corporate data, and determine which ones can be secured and managed. Consolidate applications whenever possible so different groups of employees, for example, aren’t using three different applications that do the same thing.
Develop an acceptable use policy that identifies approved applications and how they should be used. Explain risks and consequences of failing to adhere to the company policy, and use firewalls to block banned applications. Many organizations have created private app stores to deliver approved business applications to mobile devices. Gartner predicts that 60 percent of organizations will implement such a platform by 2016.
Finally, instead of being overly restrictive and increasing the risk of employees going rogue, encourage input from employees. Find out what they like about certain applications and consider developing customized applications with similar capabilities so these applications can be properly managed. Create a policy that enables employees to suggest new applications to IT and make a case for allowing them to be used.
OpenStack Taking the Cloud by Storm - 2014.10.17
OpenStack may be unfamiliar to many IT professionals, but it is rapidly becoming a significant force in cloud computing. Launched four years ago by NASA and Rackspace, OpenStack is an open-source cloud operating system that enables the centralized management and user provisioning of large, scalable pools of compute, storage and networking resources. The OpenStack market is growing rapidly — 451 Research recently projected that OpenStack will generate $3.3 billion in revenue by 2018, up from just $600 million in 2013.
According to Michael Cote, research director, 451 Research: “This growth is driven by both public and private cloud usage with private cloud getting much of the enterprise’s attention. These companies are interested in the agility benefits of cloud but also want single-tenant, private cloud deployment models.”
OpenStack enables organizations to quickly and easily deploy cloud systems, introduce new services and respond to changing market conditions. Different systems are available for private, public and hybrid clouds, all of which can be highly customized according to business needs, thanks to the open-source nature of the system.
Like other open-source projects, OpenStack benefits from the contributions of the development community. The OpenStack community is made up of software developers and cloud computing experts around the world who collaborate to make cloud services accessible on commodity equipment. The ninth release of OpenStack, code-named Icehouse, became available in April, with new features that reflect community-wide efforts to continue to improve the system.
OpenStack is also gaining the attention of the vendor community. HP, Cisco and VMware are among the industry leaders that support OpenStack or offer OpenStack-based products. As an OpenStack cofounder, Rackspace is a top contributor to the OpenStack community and now runs one of the world’s largest OpenStack-powered clouds.
Rackspace just announced a new release of its Private Cloud offering powered by OpenStack Icehouse. Designed to run enterprise production workloads, Rackspace Private Cloud delivers the agility and efficiency of a public cloud combined with the enhanced performance, security and control of a dedicated environment. Benefits include:
- Scalability. Rackspace Private Cloud is designed to scale to hundreds of nodes with consistent performance.
- Availability. It includes a 99.99 percent OpenStack API uptime guarantee, meaning that OpenStack will be highly available to applications running in the cloud. Rackspace clearly believes that OpenStack is mature and reliable.
- Application-Level Automation. Rackspace Private Cloud supports OpenStack Orchestration (Heat), which enables the automated provisioning of infrastructure, services and applications.
- Hybrid Cloud Options. Customers can use the RackConnect hybrid cloud solution to securely connect the Rackspace Private Cloud to the Rackspace Public Cloud.
As a Rackspace Strategic Partner, Sigma can help you take advantage of Rackspace cloud solutions backed by Sigma’s certified and experienced engineers. Let us help you determine if Rackspace Private Cloud and the OpenStack platform are right for your cloud initiatives.
Relieving Complexity in Data Protection - 2014.09.15
Data backup may not be very glamorous, but it is arguably the most critical technology within the data center. Organizations need foolproof solutions to protect mission-critical data and enable rapid recovery in the event of disaster or system failure.
Yet backup continues to be a pain point for many organizations. In many organizations, backup technologies and processes have not kept pace with growing data volumes and increasing virtualization, making it difficult to complete backups within the available window. And should disaster strike, organizations may find that recovery is challenging and excruciatingly slow.
A recent study by IDC found that many organizations are facing backup complexity within their heterogeneous environments. Almost 37 percent of organizations have to simultaneously back up virtual, physical and cloud-based servers. Of those that are managing virtual infrastructures, 54 percent have to manage two or more different hypervisors.
These challenges are exacerbated by the fact that many organizations have multiple backup solutions within their environments. Point solutions for VM backup and de-duplication add to the complexity and create integration nightmares. As data volumes continue to mushroom and 24×7 availability becomes the rule, IT organizations are struggling to keep up. Legacy backup systems become roadblocks that impact IT’s ability to deliver services and meet increasingly stringent SLAs.
Due to these headaches, backup responsibilities are being pushed outside of traditional backup administration roles and onto database administrators, virtualization managers and others within the IT organization. Such fragmented processes only add to the confusion and complexity.
Organizations need a holistic data protection platform that can accommodate the wide range of workloads within today’s data center, including physical and virtual servers, networked storage and arrays, and big data. Such a platform should deliver improved performance, reliability, manageability and scalability coupled with reduced complexity and lower total cost of ownership.
There are a number of mature and emerging technologies that can help organizations overcome their backup challenges. We continue to see strong demand for purpose-built backup appliances that combine software, storage arrays, a server engine and de-duplication. The latest solutions can be tightly integrated with backup software, and deliver the performance and scale to support thousands of VMs.
But data protection is as much strategy as it is technology. Organizations need to implement policies and processes around storage tiering and data archival to reduce the burden on primary storage and backup and recovery systems. Cloud-based and hybrid solutions, such as Disaster-Recovery-as-a-Service can also help to relieve bottlenecks and improve data protection.
IDC predicts that storage will increase 50x over the next 10 years, but IT staffing is only expected to grow 1.5x over the same timeframe. Organizations need to rethink their backup environments and implement a unified approach that simplifies management and delivers performance and scale. It may not be glamorous, but it’s absolutely critical.
Sigma’s engineering team has the know-how and experience to architect an end-to-end data protection strategy. We are helping organizations do more with less while maximizing the value of their existing IT investments.
Tapping the Benefits of the Hybrid Cloud - 2014.09.11
Cloud computing can deliver reduced capital and operational costs, increased agility and simplified IT management, enabling IT personnel to focus on strategic initiatives rather than keeping the lights on. But security and data privacy risks remain obstacles to public cloud deployment, and many organizations are concerned about loss of control to a third party, cloud application performance and regulatory compliance issues.
Private clouds help address these concerns while enabling organizations to leverage the technical benefits of cloud computing. Applications and data remain squarely behind the firewall, while the data center becomes more flexible and scalable. However, implementing a private cloud can be challenging and requires a higher investment than public cloud solutions.
Enter the hybrid cloud.
As the name suggests, a hybrid cloud enables organizations to have certain services and applications managed externally on a public cloud and others managed internally on a private cloud. This makes it possible to keep mission-critical applications and sensitive data close to the vest while leveraging the efficiency and flexibility of the public cloud for services such as data archival.
A hybrid cloud isn’t the same as using public and private cloud services simultaneously. In a true hybrid cloud, the public and private clouds are integrated to allow IT to easily migrate workloads in order to optimize the environment. A single interface streamlines the flow of data between the public and private clouds and creates a consistent end-user experience.
A hybrid cloud must be managed with as much rigor as a private cloud and traditional data center solutions. The key is to minimize the design differences between the public and private cloud environments so a centralized management strategy can be applied to the hybrid cloud as a whole with as few adjustments as possible.
A hybrid cloud management strategy should cover:
- Best practices for configuration, change control, patch management and implementation.
- Security, including the encryption of data during transmission and at rest, access controls, firewalls and policy enforcement.
- Device fault monitoring and performance alerts, which should be centrally managed.
- Budget controls, including alerts for both unused resources and charges that exceed certain levels.
- Capacity planning and provisioning for both the onsite data center and the public cloud.
- Data classification to ensure that the most sensitive data remains in the private cloud.
Network World has declared 2014 to be the year of hybrid cloud adoption, and Gartner predicts that half of mainstream enterprises will have a hybrid cloud by 2017. Nevertheless, a change in culture will likely be necessary for widespread hybrid cloud adoption. Successful implementation requires not only different skills and expertise but a different thought process than traditional IT infrastructure.
As IT continues its transformation from technical asset to strategic business asset, cloud services must be evaluated for their ability to improve business processes and user experiences, not solve technical problems. A strategic approach is the key to taking full advantage of a hybrid cloud.
Fifty-five percent of respondents to Computerworld’s 2014 IT Salary Survey said they communicate frequently or very frequently outside of business hours, including when they’re on vacation. According to the TEKsystems Stress & Pride survey, 41 percent of IT professionals said they’re expected to be available around the clock, 38 percent are accessible only during traditional business hours, and 21 percent fall somewhere in between 9 to 5 and 24×7.
Clearly, today’s always-on business mentality, which requires always-available support, has led to an around-the-clock IT culture. This level of IT accessibility may be necessary if you expect to stay relevant and competitive, but delivering and maintaining 24/7 responsiveness is a tall order.
First, the complexity of today’s IT environments makes it difficult to find, hire and train IT professionals who are capable of taking a call at 2 am and quickly solving the problem. Organizations that have workforces and customers dispersed across the globe must have the same level of support on Sunday at midnight that they have during regular business hours. This can quickly turn into a costly proposition.
When IT is focused on fielding and responding to support requests at all hours, it becomes virtually impossible to escape the old 80/20 ratio: 80 percent of IT’s time is spent on routine maintenance, and only 20 percent is left over for innovation. Having 24×7 availability is largely wasted when so little time can be spent developing new services and solutions that create competitive advantages.
Also, consider the heavy burden placed upon the collective shoulders of your IT department and how it impacts their productivity. This pressure results in a phenomenon called presenteeism, which occurs when an employee is physically present but not performing at optimal levels, usually due to stress, depression or exhaustion. Studies have shown that presenteeism negatively impacts productivity more than absenteeism.
Many organizations are supplementing their in-house IT departments with outsourced managed services to meet the demands of the around-the-clock IT culture. By turning over day-to-day maintenance tasks, support and other responsibilities to a managed services provider, organizations can take advantage of a number of benefits.
- Staffing relief. Instead of hiring and training additional staff to ensure 24×7 availability, let the managed services provider take on this responsibility. For a monthly fee, you’ll have access to a team of IT experts who are using the latest hardware and software.
- More innovation. Outsourcing routine maintenance and support is the first step toward reversing the 80/20 ratio. Let your in-house IT department focus on strategic growth initiatives that improve business agility and set your organization apart from the competition.
- Lower IT costs. It’s basic math. Utilizing managed services is much more cost-effective than adding staff, and it may enable you to streamline your IT infrastructure.
- Greater IT job satisfaction. IT doesn’t want to spend all of its time keeping the lights on. They want a challenge. A managed services provider helps your IT staff make a difference without the burnout that results from being on call 24×7.
- Additional perks. Depending on the managed services you choose, your organization could also benefit from improved security, increased storage performance and capacity, less unplanned downtime, and improved disaster recovery planning.
In Part 1 of this post, we introduced Cisco Application Centric Infrastructure (ACI), a transformational approach to IT that many industry experts claim is among the most disruptive data center innovations in a generation. Cisco ACI offers a breakthrough solution that meets the agility demands of the modern enterprise in today’s application-driven business environment.
Cisco ACI speeds application deployment cycles from months to minutes, breaks down silos to create a single point of management for all administrators in both physical and virtual networks, and boasts an open ecosystem of partners working together to drive innovation and deliver maximum value to enterprises.
The core technology upon which Cisco ACI is built is as impressive as the benefits it delivers. This technology includes:
The Nexus 9000 Switch Family. Serving as the foundation of Cisco ACI, the new Nexus 9000 switches provide both modular and fixed 10/40/100 Gigabit Ethernet switch configurations. This allows enterprises to transition seamlessly from traditional NX-OS mode to the new ACI mode, which leverages ACI’s policy-driven services and automation capabilities. Designed with both merchant silicon and custom ASICs from Cisco, Nexus 9000 switches provide improved performance, scalability, security, virtualization support, programmability, and power and cooling efficiency.
Cisco Application Policy Infrastructure Controller (APIC). This new appliance is at the core of automation and management for the ACI fabric, bringing together physical, virtual and cloud infrastructure management in a common, open framework. This open architecture allows for the integration of third-party Layer 4 through 7 services, virtualization and management. Cisco APIC optimizes performance and provides centralized, system-level visibility and application-level control based upon defined application network profiles, which are used to expedite the provisioning of network resources.
Cisco Application Virtual Switch (AVS). Specifically designed for Cisco ACI and managed by Cisco APIC, the Cisco AVS enables intelligent policy enforcement and optimal traffic steering while enhancing application visibility and performance.
Cisco Adaptive Security Virtual Appliance (ASAv). This is the first transparently integrated, application-based security solution, providing consistent security across both physical and virtual environments.
40G BiDi Optics. This innovation allows enterprises to avoid massive fiber overhauls as they move to 10/40G. 40G BiDi makes it possible to maintain existing 10G cables, resulting in significant labor and fiber cost savings.
Cisco ACI is a direct response to the need for greater business agility, which can only be achieved through an application-centric, unified operational model. Let’s discuss how you can transform your data center, simplify IT management and unleash your applications quickly and efficiently with Cisco ACI.
App Integration Critical with Growing Cloud Adoption - 2014.07.31
A new study by Osterman Research found that the average small to midsize business (SMB) is using 14.3 cloud-based applications. By some estimates, workers are using 10 times more cloud apps than IT thinks — so if you’re aware of 30 cloud apps being used in your organization you’re probably looking at 300.
Cloud-based applications are faster to deploy, simpler to use and have a lower upfront cost than traditional enterprise applications. As a result, many organizations are using cloud-based applications to become more adaptive to business conditions and responsive to customer demands. However, cloud adoption is in many ways impeding those goals.
These tools by nature exist outside the IT infrastructure and, as a consequence, outside of the general flow of data among business processes. Unless organizations take steps to integrate cloud and enterprise applications, they wind up with application “silos” that impact productivity and limit the economic value of the software investment.
Enterprise application integration (EAI) has its roots in the 1990s when many companies started buying packaged software solutions that automated specific business processes. These systems created silos of automation that produced redundant information and became problematic when common data changed — changes to data in one application would not necessarily be reflected in the other. Organizations began searching for ways to integrate these disparate systems in order to automate business processes that spanned them.
EAI is still very relevant today. Information consumers are demanding that data be made available to them regardless of its structure or distribution across the enterprise. The cloud is simply increasing the importance and changing the nature of EAI.
Cloud-based applications deliver rapid business benefits without the burden on IT to manage and maintain both the application and the underlying IT infrastructure. But not every application can move to the cloud, so most organizations end up with a hybrid environment that requires the integration of cloud-based apps with traditional applications and data sources. Whether it’s on-premises or in the cloud, an application has to support the organization’s business processes. As a result, application integration has become critical.
Unfortunately, cloud integration has been hindered by a lack of complete integration tools. Traditional EAI solutions provide the functionality large enterprises need to integrate complex enterprise applications but lack the speed and simplicity that’s desirable for cloud deployments. Custom code offers a relatively low upfront cost but is time-consuming to develop and costly to maintain over the long term.
The drawbacks of these solutions are driving strong interest in hybrid cloud solutions. In the hybrid cloud, organizations can tap both public and private cloud services via a single interface that streamlines the flow of data and creates a consistent user experience. Integration Platform-as-a-Service (iPaaS) supports hybrid cloud application integration with tools that enable applications to communicate and share data sources.
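To make the integration challenge concrete, here is a minimal sketch of the pattern an iPaaS tool or custom integration performs under the hood: pull records from a cloud application’s REST API and merge them into an on-premises database. The endpoint, token and response format are hypothetical stand-ins for a real provider’s API.

```python
import sqlite3
import requests  # third-party HTTP library

# Hypothetical cloud CRM endpoint and token; substitute your provider's real API details.
API_URL = "https://api.example-crm.com/v1/contacts"
API_TOKEN = "replace-with-real-token"

def fetch_cloud_contacts():
    resp = requests.get(API_URL, headers={"Authorization": f"Bearer {API_TOKEN}"}, timeout=30)
    resp.raise_for_status()
    return resp.json()["contacts"]  # assumed response shape

def sync_to_local_db(contacts, db_path="contacts.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS contacts
                    (id TEXT PRIMARY KEY, name TEXT, email TEXT)""")
    conn.executemany(
        "INSERT OR REPLACE INTO contacts (id, name, email) VALUES (?, ?, ?)",
        [(c["id"], c["name"], c["email"]) for c in contacts],
    )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    sync_to_local_db(fetch_cloud_contacts())
```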
Sigma Solutions has developed strong relationships with leading cloud providers to complement the skill sets of our engineering team. We are helping organizations maximize the value of the cloud by developing strategies for integrating cloud services into the overall IT infrastructure.
How DCIM Can Optimize the Data Center Spend - 2014.07.28
We’ve all heard the statistic. 80 percent of the IT budget is used to keep the lights on in the data center. That’s not just the data center budget. That’s 80 percent of the entire IT budget. Only 20 percent goes to creating any business value from technology investments.
There is a legitimate concern that the 80 percent figure could easily rise as data centers become denser and more complex. While hardware is being designed to reduce the data center footprint, this equipment still needs to support more users, more devices and more data.
Simply put, organizations need to get a better handle on optimizing their data centers.
This concern has led organizations to look more closely at data center infrastructure management (DCIM), a broadly used term that may mean different things to different people. In general terms, DCIM is the concept of managing the data center environment as a whole to ensure optimization and cost efficiency.
DCIM was introduced as part of the green IT movement and the desire to control power and cooling costs. In fact, one Gartner analyst claims organizations can recoup the cost of DCIM tools in three years on power and cooling savings alone. Today, DCIM has been expanded to include asset management, capacity management and data center monitoring. While various tools are capable of handling some of these tasks, the goal of DCIM is to optimize data center cost and performance by centralizing management functions in one cohesive system.
DCIM enables IT to assess the existing data center infrastructure and predict how changes or additions will impact the data center’s efficiency and performance. For example, a major concern today is capacity management. DCIM tools are capable of providing a virtual 3-D view of the data center, including hardware and cabling, as well as a dashboard view of capacity-related data. DCIM also can model how the placement of additional equipment will appear and assess how it will affect data center capacity.
Although they can deliver significant business value, DCIM tools are extremely complicated. However, the growing popularity of DCIM solutions points to the driving need to optimize the data center spend. Even if DCIM is out of reach, organizations should be looking at ways to streamline their data center operations.
Sigma One Source managed services were developed with the same goal as DCIM – to optimize the performance and efficiency of the data center. Our monitoring, management and support minimize downtime and unexpected expenses and relieve you of day-to-day administrative burdens. You can get a better handle on data center costs and operations, spend more time on business strategy and innovation, and worry less about maintenance. Contact us to learn more about how Sigma One Source managed services can help.
Applications drive business, from communication and collaboration to research and sales. This isn’t a trend to keep an eye on for the future. Instead, this has quickly become today’s business reality – so quickly, in fact, that IT managers are scrambling to keep up with this seismic shift to an application-centric business model.
Complexity is the biggest obstacle inhibiting IT’s response to this trend. New applications, upgrades and migrations can take months to deploy, making it difficult for IT to bring new products and services to new markets while managing risk, maintaining security and compliance, and meeting efficiency demands. At the same time, IT is expected to manage more and more applications in less time with fewer resources.
In order to stay competitive, businesses need data centers that enable greater agility without sacrificing security.
Cisco has introduced a new and potentially game-changing model for IT – Application-Centric Infrastructure (ACI). The ACI model is focused on empowering employees and dramatically improving agility and productivity through real-time application delivery. Based upon industry standards, Cisco ACI is a revolutionary data center and cloud solution that provides total visibility and a single point of management in both physical and virtual networks.
Cisco ACI responds to increasing demands for new applications by shrinking deployment cycles from months to minutes, thanks to innovations in software, hardware and systems, as well as a network policy model that is application-aware and leverages open APIs. By reducing the time required to provision, change or remove applications, Cisco ACI accelerates the pace of business, resulting in a 75 percent lower total cost of ownership compared to software-only network virtualization.
Cisco ACI knocks down silos, providing every administrator, regardless of their area of focus, with an identical view of an organization’s entire infrastructure. By combining all data center resources – networking, storage, compute, applications and security – into one cohesive unit, Cisco ACI makes it easier to configure, troubleshoot and change IT components while maximizing application performance.
Cisco ACI is open technology, with an open ecosystem of partners that are collaborating to drive innovation, leverage existing IT investments and provide organizations with enhanced business agility. This diverse group of leading technology companies can use ACI’s open and extensible application policy model to ensure faster support of applications within the data center.
In Part 2 of this post, I’ll dig deeper into the technology that powers Cisco Application-Centric Infrastructure and how this breakthrough model works.
SDN and Big Data: The Perfect Marriage? - 2014.07.15
Most organizations have come around to the fact that Big Data can be used to drive business strategy. However, Big Data is primarily unstructured data that doesn’t fit into traditional database schemas, making it difficult to mine for value. As a result, vendors are working on technological solutions that enable organizations to search and query this data, extract the most important information, and gain the knowledge that can create competitive advantages.
Analytics software has been developed that enables organizations to search and query unstructured data, but this software requires significant server processing power. As a result, multiple servers are harnessed in a massively parallel application.
However, data must be transferred to the servers for processing, placing a heavy burden on network resources and creating a bottleneck that slows processing speeds. Studies have shown that data transfers account for more than half of the processing time in some instances. By relieving this bottleneck, the processing of Big Data can be accelerated and provide organizations with real-time analytics. This requires a network that can intelligently scale to meet the bandwidth demands of the data transfer.
Today, however, network provisioning and management is largely done manually, creating complexity and operational overhead even with relatively stable application and infrastructure requirements. Such a network environment creates major headaches when you attempt to support the changing workloads associated with server and storage virtualization. Big Data further amplifies that pain.
Software-defined networking (SDN) is increasingly viewed as the approach best suited to support Big Data analytics. Because SDN decouples the control plane from the data plane, the network can be centrally programmed through a single controller to meet Big Data demands. SDN enables IT organizations to create customizable, scalable and agile networks in which servers communicate efficiently, shortening wait times and speeding Big Data processing.
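To illustrate what “centrally programmed through a single controller” can look like in practice, here is a minimal, hypothetical sketch that asks a controller’s northbound REST API to prioritize traffic between two Hadoop nodes. The controller URL and request schema are assumptions for illustration only; real controllers such as OpenDaylight define their own APIs.

```python
import requests  # third-party HTTP library

# Hypothetical northbound API of an SDN controller; real controllers define their own schemas.
CONTROLLER = "https://sdn-controller.example.local:8443/api/flows"

def prioritize_hadoop_transfer(src_ip, dst_ip, priority=800):
    """Push a flow rule that gives traffic between two Hadoop nodes a high-priority queue."""
    flow = {
        "match": {"ipv4_src": src_ip, "ipv4_dst": dst_ip, "tcp_dst": 50010},
        "actions": [{"type": "set_queue", "queue_id": 1}, {"type": "output", "port": "normal"}],
        "priority": priority,
    }
    # verify=False assumes a self-signed lab certificate on the controller
    resp = requests.post(CONTROLLER, json=flow, timeout=10, verify=False)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    prioritize_hadoop_transfer("10.0.1.11", "10.0.1.12")
```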
According to research from IBM and Rice University, such network-aware applications have been estimated to decrease the time needed to complete critical Big Data operations by 70 percent. A separate study from Infoblox revealed that an SDN-aware version of Hadoop lowered a key benchmark by 40 percent when performed over an SDN network.
The performance gains are so significant that Big Data may become a catalyst for SDN adoption. Organizations are beginning to lean more heavily upon Big Data to provide value, guide strategic business initiatives and produce competitive advantages. SDN has the potential to meet the performance demands of Big Data applications, better utilize network resources and significantly reduce the amount of hardware required. This would make Big Data easier to digest and convert into revenue.
Why the Software-Defined Data Center Is the Future - 2014.07.10
As organizations have struggled to upgrade their IT infrastructures to support bring-your-own-device (BYOD) initiatives, cloud-based services, virtualization and big data, technology and management have become complex and inefficient. Hardware-focused IT environments lack the flexibility and agility needed to meet the demands of the modern business landscape.
Server virtualization has helped, but it can only go so far when the rest of the data center isn’t virtualized. Instead of adding to already complex networks in which silos and manual hardware management waste time and IT resources, organizations need a fresh approach – an approach that embraces automation, knocks down silos and shares resources in order to maximize efficiency and utilization.
One such approach is the software-defined data center (SDDC). All elements of the SDDC environment, including networking, storage, compute and security, are virtualized, abstracted from hardware, pooled, and delivered as a service. Instead of manually configuring each individual piece of hardware, administrators use intelligent software on a single console to configure policies for the entire network.
Essentially, the intelligence is removed from the physical device and runs in software on virtual devices in the SDDC, creating a number of benefits:
- Improved efficiency and agility. Resources are automatically provisioned and deployed and workloads are automatically balanced according to programmed policies. As market conditions change, new applications can be up and running – and providing real business value – in a matter of minutes.
- Reduced costs. The sharing and automatic assigning of IT resources means these resources are better utilized, which can virtually eliminate wasteful IT spending while boosting productivity. Also, the SDDC uses commodity equipment that is less expensive and easier to maintain than proprietary hardware.
- Less time spent on routine maintenance. Because the IT team doesn’t have to spend time manually configuring individual devices, they can shift their attention to strategic initiatives that drive revenue and create competitive advantages.
- Greater flexibility. Organizations can utilize a public, private or hybrid cloud delivery model for the SDDC. And because SDDC software runs on commodity x86 servers, organizations can avoid being tied to a particular vendor’s equipment.
Switching to an SDDC doesn’t happen overnight. More than a particular set of technologies, the SDDC is a completely new way of thinking about how the data center is built and how IT services are managed and delivered. As a result, organizations need to determine whether they have the capacity to support migration to the SDDC. Because most IT architectures include technology from a number of vendors, a management platform that spans multiple hypervisors and clouds will help to simplify administration. Finally, IT needs to understand configuration management in order to shift from manual to automated provisioning of resources.
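To make the idea of policy-driven provisioning more concrete, here is a minimal sketch of the declarative, desired-state model behind SDDC automation. The tier definitions and the inventory and provisioning functions are hypothetical stand-ins for what a real SDDC platform would expose.

```python
# Desired state declared once, in software, rather than configured device by device.
DESIRED_STATE = {
    "web-tier": {"vms": 4, "vcpus": 2, "storage_gb": 50, "network": "dmz"},
    "db-tier":  {"vms": 2, "vcpus": 8, "storage_gb": 500, "network": "internal"},
}

def current_vm_count(tier):
    """Hypothetical inventory call; a real platform would query its API."""
    return {"web-tier": 2, "db-tier": 2}.get(tier, 0)

def provision_vm(tier, spec):
    """Hypothetical provisioning call; a real platform would create the VM, storage and network."""
    print(f"Provisioning VM for {tier}: {spec['vcpus']} vCPUs, "
          f"{spec['storage_gb']} GB on network '{spec['network']}'")

def reconcile():
    """Compare desired state with actual state and close the gap automatically."""
    for tier, spec in DESIRED_STATE.items():
        for _ in range(spec["vms"] - current_vm_count(tier)):
            provision_vm(tier, spec)

if __name__ == "__main__":
    reconcile()
```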
Disaster Recovery in the Cloud - 2014.06.25
Disaster can strike at any time and without warning, causing businesses to suffer downtime and data loss. The disruption to operations can be devastating. That’s one reason why 25 percent of businesses fail to reopen following a disaster, according to the Institute for Business and Home Safety.
Despite the risk, few organizations have an effective disaster recovery (DR) platform. Traditional DR environments require organizations to duplicate their entire production infrastructure and associated operational processes in an offsite data center. Because of the significant investments and operational overhead involved, fast and reliable DR has remained out of reach for all but the largest organizations.
Virtualization reduces the cost of setting up a DR site by minimizing the number of physical servers required for recovery and enabling data replication and failover across different types of equipment. However, it still requires organizations to purchase equipment and dedicate IT resources to maintain that equipment and manage the DR solution.
The cloud is helping to relieve these challenges. Cloud-based DR-as-a-Service (DRaaS) solutions provide a robust DR platform in a subscription-based offering. DRaaS shifts the overhead associated with DR to a third-party service provider, eliminating the need to acquire data center space and purchase hardware or software.
Expertise is another advantage of DRaaS. In addition to providing infrastructure, true DRaaS adds multiple layers of services, including DR planning, ongoing management and support. DR processes are handled by the service provider’s DR specialists, increasing confidence in the solution and allowing the customer’s IT resources to be redirected toward other initiatives.
Because DRaaS capabilities vary widely, organizations should do due diligence in selecting a service provider. Key considerations include:
- Data center capabilities — Does the service provider’s data center have redundant power and communication links and adequate fire suppression?
- Geographic location — Is the service provider’s data center located in an area where earthquakes, hurricanes, tornados and other natural disasters are unlikely to occur?
- Remote access capabilities — Can personnel administer the site remotely if weather, pandemic or other circumstances prevent travel?
- Regulatory compliance — Is the data center SSAE 16 certified? Does the service provider follow applicable operational standards?
- Testing — Does the service provider conduct periodic testing of the DR plan?
- End-to-end support — Does the service provider monitor data replication processes and provide 24×7 support?
- Recovery SLAs — Does the service provider employ experienced personnel who can quickly activate and manage failover and failback processes?
It is also important to select a service provider with data centers far enough away that a regional disaster won’t affect both the production and DR site, yet close enough for effective data replication.
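As a rough way to judge “close enough for effective data replication,” the back-of-the-envelope sketch below estimates how long a daily change set takes to replicate over a given WAN link, which bounds the recovery point objective (RPO) you can realistically promise. All of the input figures are hypothetical and should be replaced with your own.

```python
# Back-of-the-envelope replication check; all figures are hypothetical inputs.
daily_change_gb = 200          # data changed per day that must be replicated
wan_mbps = 500                 # usable WAN bandwidth to the DR site
efficiency = 0.7               # allowance for protocol overhead and contention

usable_mbps = wan_mbps * efficiency
hours_to_replicate = (daily_change_gb * 8 * 1024) / (usable_mbps * 3600)  # GB -> megabits

print(f"Replicating {daily_change_gb} GB/day over {wan_mbps} Mbps "
      f"takes about {hours_to_replicate:.1f} hours")
# If this number approaches 24 hours, the link cannot keep up and the achievable RPO degrades.
```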
After selecting a service provider, organizations should use the migration to DRaaS to bring their DR plans into closer alignment with business objectives. Instead of focusing on protecting individual systems and data, IT teams can work with a knowledgeable DRaaS provider to develop an enterprise-level DR plan that considers critical dependencies within the environment.
More and more organizations are taking advantage of these benefits — Reportstack forecasts that the DRaaS market will grow more than 54 percent between 2014 and 2018. DRaaS offers an alternative to traditional DR that is less complex, faster to implement and more affordable, making it one of the most compelling cloud-based services available.
Why You Must Assess IT Operational Maturity, Part 1 - 2014.06.16
The CIO traditionally has been viewed as the person in charge of figuring out how to use technology to deliver service while controlling costs. The CIO was in charge of keeping the IT environment running, ensuring that users had computers and network access, and putting out every fire imaginable. The CIO often got all of the blame when things went wrong and none of the credit when things went right.
Today, the CIO’s role is evolving. IT is now a critical component of business processes that build revenue and create competitive advantages. Modern technology has automated many of the administrative tasks that previously required direct involvement from the CIO’s team. As a result, the CIO is being asked to not only keep the lights on but also to drive innovation.
This progression of IT from survivalist to strategic asset is measured through IT operational maturity. According to Gartner, IT Operational Maturity level is determined by assessing how effectively an IT organization has aligned process, technology, people and management. This assessment is used to create and implement an improvement roadmap, which consists of a series of initiatives that enable the organization to optimize business and IT operations and maximize the ROI from the IT infrastructure.
Many organizations struggle with IT inefficiency, inconsistent availability, lack of functionality in business applications, questionable security, complex management, and end-user dissatisfaction. IT operational maturity initiatives help to overcome these challenges and optimize the IT environment. As a result, IT is able to:
- reduce IT costs and risk
- improve user and customer experiences
- boost productivity
- enhance change management practices
- build strategic relationships with vendors
- enable greater innovation and agility as market conditions change
An IT Operational Maturity Assessment should begin with a clear understanding of what the business expects from IT. In other words, what is the role of IT in supporting business strategies and objectives? This will help shift the focus of IT from cost containment, maintenance and stability to innovation, business agility and improving the experience of the customers of your IT services. The assessment should have a unified approach, incorporating all components of an often complex IT infrastructure and strategy, with a goal of achieving clearly defined business objectives.
In a future post, we’ll discuss the Gartner IT Operational Maturity Level model and how an IT Operational Maturity Assessment can help organizations make better strategic decisions and become more successful.
As the amount of data being produced and transferred across corporate networks continues to skyrocket, organizations are struggling to meet growing storage requirements. Instead of constantly adding local storage capacity, more and more organizations are turning to cloud-based storage as a cost-effective alternative.
There are two basic cloud storage options that can be implemented by both small-to-midsize businesses and large enterprises – Storage-as-a-Service (STaaS) and Backup-as-a-Service (BaaS). STaaS enables organizations to store data remotely by utilizing the storage infrastructure of a cloud service provider. Similarly, BaaS allows the remote backup of data on cloud-based servers owned by a service provider. In both models, data is accessed via the Internet using an encrypted connection.
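As a simple illustration of this consumption model, the sketch below uploads a backup file to an S3-compatible object store over HTTPS, requesting server-side encryption at rest. The bucket name and file path are hypothetical, and boto3 (the AWS SDK for Python) is assumed purely for illustration.

```python
import boto3  # AWS SDK for Python; also works with many S3-compatible stores

# Credentials are assumed to come from the environment or standard config files.
s3 = boto3.client("s3")  # the HTTPS endpoint encrypts data in transit

def upload_backup(local_path, bucket="example-backup-bucket"):
    """Upload one backup file, asking the provider to encrypt it at rest."""
    with open(local_path, "rb") as f:
        s3.put_object(
            Bucket=bucket,
            Key=f"backups/{local_path.split('/')[-1]}",
            Body=f,
            ServerSideEncryption="AES256",  # encrypt at rest on the provider side
        )

if __name__ == "__main__":
    upload_backup("/var/backups/db-2014-06-01.dump")
```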
Cloud storage can benefit organizations of all sizes in a number of ways:
- Capital Preservation. Instead of purchasing and allocating space for hardware, you use the enterprise-grade infrastructure of the provider who is responsible for keeping the environment up-to-date.
- Operational Efficiency. With cloud storage, you don’t have to maintain offsite backups, manage and support the storage infrastructure, or power and cool the hardware.
- Simplicity. Cloud storage reduces your data center footprint and the complexity associated with enterprise storage equipment.
- Scalability. Predicting how much storage capacity you’ll need for the next year and beyond can be very difficult, and many organizations overspend or fall short. With cloud storage, you pay as you go, adding or reducing capacity based on current needs (see the sketch after this list).
- Mobility. Bring-your-own-device policies and mobile workforces make anytime, anywhere access to data a business imperative. Cloud storage enables users to access data from any desktop or mobile device with an Internet connection. This improves productivity, flexibility, collaboration and customer service.
- Security. Service providers typically have more robust security systems in place and highly qualified IT personnel to manage those systems. Data is encrypted, backed up and secured on multiple servers, which speeds disaster recovery and minimizes the risk of equipment failure and security breaches.
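To make the pay-as-you-go point concrete (the Scalability item above), here is a back-of-the-envelope comparison of fixed, up-front provisioning against monthly usage-based billing. Every figure is hypothetical and is included only to illustrate how the two billing models behave as capacity needs grow.

```python
# Back-of-the-envelope comparison of fixed provisioning vs. pay-as-you-go storage.
# All figures are hypothetical and exist only to illustrate the billing models.

monthly_usage_tb = [2, 2, 3, 4, 4, 5, 6, 6, 7, 8, 9, 10]  # projected growth over a year

# Fixed provisioning: buy enough capacity up front to cover the peak month.
cost_per_tb_purchased = 400          # one-time hardware cost per TB (hypothetical)
fixed_cost = max(monthly_usage_tb) * cost_per_tb_purchased

# Pay-as-you-go: pay only for the capacity actually used each month.
cost_per_tb_month = 25               # monthly cloud price per TB (hypothetical)
cloud_cost = sum(tb * cost_per_tb_month for tb in monthly_usage_tb)

print(f"Fixed up-front spend:  ${fixed_cost:,}")
print(f"Pay-as-you-go, year 1: ${cloud_cost:,}")
```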
Before moving storage to the cloud, you need to assess the readiness of your IT infrastructure. In other words, you can’t just rip out your storage equipment, flip a switch, and start using cloud storage. You may want to keep certain business-critical data onsite instead of turning over control to the provider, and you need to make sure your infrastructure can support cloud applications and provide reliable Internet connections so you can take full advantage of the cloud without compromising performance or reliability.
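One simple readiness check is to measure how quickly and consistently your network reaches the prospective provider. The sketch below assumes the Python requests library and a hypothetical provider health-check URL; run it from the office network at different times of day to get a feel for latency and variability.

```python
# Minimal sketch of a connectivity check against a prospective cloud provider.
# The endpoint URL is a hypothetical placeholder; repeat the test over several
# days to see how consistent latency really is.
import time
import requests

ENDPOINT = "https://storage.example-provider.com/health"  # hypothetical URL
samples = []

for _ in range(10):
    start = time.monotonic()
    requests.get(ENDPOINT, timeout=5)
    samples.append((time.monotonic() - start) * 1000)  # round trip in milliseconds
    time.sleep(1)

print(f"min {min(samples):.0f} ms, max {max(samples):.0f} ms, "
      f"avg {sum(samples) / len(samples):.0f} ms")
```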
You also need to assess the capabilities of your service provider. Ask where your data is physically located, who can access your data, and how long it will take to access your data. If your organization is subject to industry regulations, your provider should show you how compliance is maintained. Finally, make sure answers to all of these questions and the responsibilities of all parties are clearly defined in your service level agreement, which should be reviewed by an attorney.
Cloud storage certainly has its advantages, but only when it is deployed strategically with careful planning. Let Sigma Solutions help you determine whether cloud storage makes sense for your organization and what infrastructure upgrades may be necessary for implementation.
The process of developing, testing, deploying and changing applications in-house is typically complicated and inefficient from an IT infrastructure perspective. Each application needs hardware, an operating system, middleware, servers and an assortment of software, along with a dedicated IT team to manage that infrastructure. In addition to being expensive to power and cool, this type of environment is difficult to scale and provides little agility to quickly adapt to changing business requirements.
Platform-as-a-Service (PaaS) is a cloud-based delivery model that enables organizations to consume application infrastructure and services as a monthly operational cost. Instead of the organization hosting the application development platform in-house, the platform is delivered by a cloud service provider, which is responsible for managing, updating and securing the infrastructure and for provisioning the servers, storage and backup needed when an application is deployed. The provider may also assist with the development, testing and deployment of software.
PaaS is similar to middleware, a software layer of tools for application developers. However, middleware must be configured and managed. PaaS makes it possible for developers to focus on creating applications without worrying about the backend infrastructure. In other words, PaaS offers middleware services while shifting the operational burden to the service provider.
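To show what shifting the operational burden looks like in practice, here is a minimal example of the kind of application a developer might push to a PaaS. Flask is used purely as an illustrative framework, and the provider-specific deployment descriptor is omitted because it varies from one PaaS to another; everything beneath the application code (operating system, middleware, servers, scaling) would be supplied by the provider.

```python
# Minimal sketch of an application a developer might deploy to a PaaS.
# Flask is used purely for illustration; the provider supplies and manages
# the operating system, middleware, servers, and scaling underneath it.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a PaaS-hosted application"

if __name__ == "__main__":
    # Locally this starts a development server; on a PaaS the provider's
    # runtime typically launches the app and binds the port for you.
    app.run(host="0.0.0.0", port=8080)
```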
PaaS allows organizations to reduce capital and operational expenses, simplify their IT infrastructure and accelerate the process of launching new applications by as much as 50 percent. This allows more resources to be devoted to the development of custom applications that create competitive advantages and drive revenue. These are the key factors driving the increased adoption of PaaS as organizations seek to operate with more efficiency, speed, flexibility and agility. In fact, Gartner predicts that all organizations will leverage public or private PaaS solutions for at least a portion of their business software by 2016.
Different vendors offer different PaaS services and features. Some even have slightly different definitions of the PaaS model. Like any IT solution, there is no one-size-fits-all approach, so your solution should be customized to suit your specific business needs. Make sure the solution you choose is easily scalable and capable of supporting enterprise-grade applications, and make sure your provider can keep your data secure and maintain regulatory compliance. It’s also helpful to use a non-proprietary, interoperable PaaS solution in order to avoid vendor lock-in and allow for portability across clouds.
Sigma Solutions has partnered with industry-leading cloud providers to deliver best-of-breed PaaS solutions. Let Sigma help you determine how your organization might benefit from PaaS, assess your existing infrastructure, and customize a solution that helps you operate more efficiently and effectively.
Desktop-as-a-Service: The Cloud Approach to VDI - 2014.04.18
Most organizations that use virtual desktops are hosting them onsite in their data centers. However, as cloud-based services and mobility continue to grow, Desktop-as-a-Service (DaaS) is becoming an increasingly popular delivery model. With DaaS, a cloud service provider hosts the virtual desktop infrastructure (VDI).
DaaS and VDI both streamline desktop management and allow for greater flexibility and mobility. They also make it possible to shift from PCs to low-cost thin clients or zero clients in order to reduce hardware costs. The most obvious difference between DaaS and VDI, however, is that DaaS is hosted in the cloud and VDI is hosted in-house. Essentially, DaaS enables organizations to outsource VDI.
With DaaS, organizations pay a monthly subscription fee to a service provider and avoid any capital expenses that are required to implement and host VDI onsite. While long-term costs of DaaS and VDI are likely comparable, VDI requires a robust backend infrastructure that can be complex to implement and operate. This makes DaaS more economically feasible for many organizations. On the other hand, DaaS customers must have ample bandwidth and reliable Internet connectivity to ensure optimal performance and minimize latency, two common sources of frustration when using cloud-based services.
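A rough break-even calculation illustrates why the long-term costs can end up comparable. All figures below are hypothetical placeholders and should be replaced with real quotes from your vendors and providers.

```python
# Rough break-even sketch: onsite VDI (up-front CapEx plus operations) vs.
# DaaS (subscription only). All figures are hypothetical placeholders.

users = 200
months = 36

# Onsite VDI: backend servers, storage, and licensing up front, plus ongoing ops.
vdi_capex = 150000                  # hypothetical infrastructure and licensing
vdi_opex_per_month = 2500           # hypothetical power, support, maintenance
vdi_total = vdi_capex + vdi_opex_per_month * months

# DaaS: per-user monthly subscription, no up-front infrastructure.
daas_per_user_month = 35            # hypothetical subscription price
daas_total = users * daas_per_user_month * months

print(f"VDI over {months} months:  ${vdi_total:,}")
print(f"DaaS over {months} months: ${daas_total:,}")
```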
DaaS shifts responsibility for maintenance and for costs related to storage, backup, security and upgrades to the service provider. This reduces network complexity and removes many of the day-to-day management tasks from your IT department, although IT must still manage its virtual desktop applications and monitor remote desktop protocols. With VDI, all management, maintenance and provisioning is handled in-house. While this requires more IT resources, it also gives IT more control over data security and performance.
DaaS is flexible, as cloud-hosted desktops can be quickly deployed on virtually any device, and you can scale services up or down according to current business needs. Licensing is an issue with DaaS, however; Microsoft has yet to offer a Windows 7 licensing agreement for service providers, although there are alternatives to Windows 7. VDI licensing isn’t much better, with Software Assurance and a variety of other licenses required.
Organizations will obviously benefit from the lower upfront costs, simplified infrastructure and streamlined management with DaaS, but IT generally prefers to maintain direct control over security and sensitive data. As a result, many enterprises are choosing a hybrid approach to desktop virtualization, leveraging both onsite VDI and cloud-based DaaS. It’s simply a matter of determining which approach makes the most sense for specific groups of users within the organization.
Before moving to a DaaS model, make sure your service provider offers adequate security, connectivity, reliability and support, and provides compensation for outages in your service level agreement. Keep in mind that you can conduct pilot programs for DaaS, so take advantage of this capability to test the effectiveness of your DaaS solution and determine whether it is the right approach for your organization.