Blog

The views expressed in the posts and comments of this blog do not necessarily reflect those of Sigma Solutions. They should be understood as the personal opinions of the author. No information on this blog should be taken as official.


  • Most organizations that use virtual desktops are hosting them onsite in their data centers. However, as cloud-based services and mobility continue to grow, Desktop-as-a-Service (DaaS) is becoming an increasingly popular delivery model. With DaaS, a cloud service provider hosts the virtual desktop infrastructure (VDI).

    DaaS and VDI both streamline desktop management and allow for greater flexibility and mobility. They also make it possible to shift from PCs to low-cost thin clients or zero clients in order to reduce hardware costs. The most obvious difference between DaaS and VDI, however, is that DaaS is hosted in the cloud and VDI is hosted in-house. Essentially, DaaS enables organizations to outsource VDI.

    With DaaS, organizations pay a monthly subscription fee to a service provider and avoid any capital expenses that are required to implement and host VDI onsite. While long-term costs of DaaS and VDI are likely comparable, VDI requires a robust backend infrastructure that can be complex to implement and operate. This makes DaaS more economically feasible for many organizations. On the other hand, DaaS customers must have ample bandwidth and reliable Internet connectivity to ensure optimal performance and minimize latency, two common sources of frustration when using cloud-based services.

    DaaS shifts responsibility for maintenance and costs related to storage, backup, security and upgrades to the service provider. This reduces network complexity and removes many of the day-to-day management tasks from your IT department, although IT must still manage its virtual desktop applications and monitor remote desktop protocols. With VDI, all management, maintenance and provisioning are handled in-house. While this requires more IT resources, it also gives IT more control over data security and performance.

    DaaS is flexible, as cloud-hosted desktops can be quickly deployed on virtually any device, and you can scale services up or down according to current business needs. Licensing is an issue with DaaS, however; Microsoft has yet to offer a Windows 7 licensing agreement for service providers, although there are alternatives to Windows 7. VDI licensing isn’t much better, with Software Assurance and a variety of other licenses required.

    Organizations will obviously benefit from the lower upfront costs, simplified infrastructure and streamlined management of DaaS, but IT generally prefers to maintain direct control over security and sensitive data. As a result, many enterprises are choosing a hybrid approach to desktop virtualization, leveraging both onsite VDI and cloud-based DaaS. It’s simply a matter of determining which approach makes the most sense for specific groups of users within the organization.

    Before moving to a DaaS model, make sure your service provider offers adequate security, connectivity, reliability and support, and provides compensation for outages in your service level agreement. Keep in mind that you can conduct pilot programs for DaaS, so take advantage of this capability to test the effectiveness of your DaaS solution and determine if it is the right approach.

  • According to a TechRepublic survey, 45 percent of organizations are using virtual desktop infrastructure (VDI) for end-user computing. Research from Gartner predicts virtual desktops will expand to 70 million units in 2015 and account for 40 percent of the market – up from a 2010 estimate of 40 million units by 2013. VDI deployments are growing, but not as quickly as many had expected.

    While industry experts have been saying “this is the year for VDI” for the past few years, there are clear challenges with VDI implementation that have stopped that prediction from becoming reality. Compared to traditional desktop deployments, implementation of VDI requires significant expertise and can be very complex. Poorly planned and executed VDI deployments have led to poor user experiences and prevented organizations from taking advantage of the flexibility and simplified management that VDI is capable of delivering.

    In many instances, organizations must increase network and storage capacity in order to support VDI. Software licensing policies are still evolving and can be costly and complicated. From a strategic standpoint, organizations need to determine which employees will actually benefit from VDI and choose a scalable architecture that allows for simple VDI expansion. Simply put, implementation complexity has curbed some of the enthusiasm surrounding VDI.

    Converged infrastructure solutions promise to change that dynamic by simplifying VDI implementation. With a converged infrastructure, the entire IT environment – compute, networking, storage and virtualization resources – is delivered in one preconfigured, pretested solution.

    Converged infrastructure speeds VDI deployment by dramatically reducing complexity and streamlining the design of the data center architecture. Business applications are tested and benchmarked in advance to ensure high performance levels in mixed workloads, which minimizes risk and leads to more predictable, reliable results. Organizations can start small and scale the environment up or out by adding converged infrastructure components while maintaining the consistent performance needed for an optimized user experience.

    Today, vendors are offering converged infrastructure solutions that are designed and validated for desktop virtualization to help organizations hit the ground running and take full advantage of VDI. Many vendors are offering solutions that enable organizations to quickly respond to changing market conditions with pre-integrated platform options and template-based infrastructure and workload provisioning.

    All of these factors, along with better utilization of resources, fewer network components and fewer maintenance contracts, contribute to lower total cost of ownership when VDI is delivered via converged infrastructure. Integrated, centralized management further drives down costs and operational headaches.

    Will this be the year for VDI? We won’t start making those kinds of proclamations. But converged infrastructure may just be the game-changer that alleviates many of the concerns organizations have with VDI, leading to more widespread deployments.

  • First, the public cloud was all the rage as organizations enjoyed new levels of flexibility. As security and regulatory compliance concerns grew, the focus moved to the private cloud. Now, organizations seeking increased agility and efficiency are exploring the hybrid cloud.

    In a hybrid cloud environment, an organization seeks to maximize agility by using both public cloud services and an onsite private cloud. Instead of replacing the existing IT infrastructure, the cloud complements and enhances the corporate data center. This enables organizations to leverage the scalability and cost-efficiency of a public cloud, maintain control of mission-critical applications and data, and automatically provision resources according to current business needs.

    The number of hybrid cloud deployments remains relatively low, but it is at the level where private cloud deployments were a few years ago, according to Gartner research. In fact, nearly half of large enterprises are expected to move to a hybrid cloud by the end of 2017.

    While a shift is underway to the hybrid cloud, the technology is still evolving and challenges remain. A hybrid cloud tends to be more complex than traditional environments, making it difficult to develop policies and ensure seamless operation between cloud services and in-house architecture. Compatibility issues can lead to frustrating, productivity-draining performance issues.

    According to a study conducted by Forrester Research last year, ensuring the performance of applications and maintaining visibility and control of workloads across public and private cloud services were significant challenges associated with the hybrid cloud. IT must effectively manage configuration, security, and the detection and resolution of network issues while minimizing impact to the production environment.

    There are cultural forces at work as well. Gartner suggests the largest obstacle to more widespread hybrid cloud deployments is resistance to the transformational adjustments necessary to make it work. IT must break away from the traditional IT culture, embrace a model centered on automation and self-service, and focus on solving strategic business process problems rather than technical issues.

    One way to overcome the challenges involved with hybrid cloud deployments is to partner with a managed services provider. A managed services provider can develop a strategy based upon your organization’s business processes and goals to ensure a cohesive hybrid cloud environment. And because cloud-based services, applications and data are critical to your operations, around-the-clock network monitoring, support and mobile device management are necessary to maintain the highest levels of security and performance. Turning over these responsibilities to a managed services provider reduces costs and enables in-house IT resources to focus on strategic initiatives and take full advantage of the agility made possible by a hybrid cloud.

    Technology is now viewed more as a driver of revenue and creator of competitive advantage than a collection of tools. As a result, organizations must reevaluate their approach to how technology is managed and integrated with business processes. By leveraging managed services in a hybrid cloud environment, IT can streamline operations while delivering the flexibility, scalability, reliability and performance the business demands.


  • Keeping documents up-to-date and ensuring that colleagues have the right version has always been difficult. The problem has only grown worse with the distributed nature of today’s enterprise and the increasing use of mobile devices. Cloud-based file-sharing services such as Dropbox, Evernote and YouSendIt have emerged to provide a simple (and sometimes free) solution.

    With cloud-based file-sharing, users can access documents anytime, anywhere from any Internet-connected device. It enables employees to easily share documents with individuals outside the company firewall, and is particularly useful for files that are frequently updated or too large to email.

    But organizations are justifiably concerned about the security threats associated with cloud-based file-sharing, including data loss, theft or regulatory compliance violations. According to the “Content in the Cloud” report by the Association for Information and Image Management, 45 percent of companies have official policies regulating the use of “consumer-grade” file-sharing and collaboration systems. Although few organizations ban them outright, IBM made news a couple of years ago when it prohibited its 400,000 employees from using these systems as well as other public cloud services.

    Not every organization has the same needs and requirements as IBM. But any business that stores sensitive information should be aware of the very real risks associated with cloud-based file-sharing. Security is not the only issue — organizations should also be concerned about losing control over valuable information assets. Consider these results from a recent survey conducted by Harris Interactive:

    • 51 percent of employees think that cloud-based file-sharing is secure.
    • 38 percent have transferred sensitive files via an unapproved file-sharing service to someone else at least once; 10 percent have done it six or more times.
    • 46 percent say that it would be easy to take sensitive business documents to another employer.
    • 27 percent of users of cloud-based file-share services report still having access to documents from a previous employer.

    Simply banning the use of cloud-based file-sharing isn’t the answer. Employees need to easily access and share files and will adopt tools that allow them to do that — with or without the approval of IT.

    The best way for organizations to curb the use of consumer-grade file-sharing is to provide employees with an alternative. An enterprise-class file-sharing solution can provide the same convenience and flexibility as consumer-grade options while ensuring that IT retains the necessary control. Here are three of the many options available:

    • Citrix ShareFile offers best-in-class capabilities to users such as secure file sharing on any device, robust sync tools to manage data on multiple devices, and seamless Microsoft Outlook integration, while extending enterprise-grade security and control capabilities to organizations.
    • VMware Horizon Workspace provides a single workspace for desktops, applications and data as well as secure internal and external file-sharing.
    • Syncplicity by EMC enables one-click file-sharing and distribution of files to mobile users, and provides real-time document backup and continuous availability.

    When choosing an enterprise-class file-sharing solution, there are a number of things to consider:

    • Employee work styles and company culture. The file-sharing solution should enhance collaboration, streamline processes, and extend to customers, business partners and other third parties as appropriate.
    • Existing content and collaboration systems. The file-sharing solution may need to integrate with these tools to ensure smooth workflows as well as security, privacy and regulatory compliance.
    • Deployment, administration and management. Because file-sharing solutions are deployed to most if not all employees, administration, management and support need to be as efficient as possible.

    Most importantly, the solution needs to be as simple and intuitive as consumer-oriented products to ensure broad adoption among users. Sigma can help you evaluate and deploy an enterprise-class file-sharing solution that balances ease-of-use with security and regulatory compliance requirements.

  • While cloud computing has delivered tremendous business value to organizations of all sizes, recent data specific to midmarket companies illustrates the benefits of steady adoption based upon strategic planning. According to a recent Deloitte study, the cloud has made enterprise-class technology more accessible, economically feasible and less risky for midmarket organizations.

    Fifty-six percent of midmarket IT executives are using cloud-based services, and 53 percent say the cloud makes their companies significantly more competitive. These companies have leveraged the cloud to increase productivity, reach new customers and strengthen the company culture. Because business applications, data and services are available from anywhere on virtually any device, employees can better understand and more quickly respond to the needs of clients and prospects.

    Another survey from Evolve IP revealed that nearly nine out of 10 midmarket IT professionals believe cloud computing is the “future model for IT.” These companies are using an average of 2.5 cloud-based services, and 75 percent of respondents plan to move more services to the cloud within the next three years.

    Seventy percent say the cloud has led to greater flexibility and scalability, as the cloud supports an increasingly remote workforce that is no longer tied to an office or computer. It also enables IT to easily add new applications, services and users without purchasing new hardware, creating a more scalable infrastructure. Additionally, 60 percent of cloud users report improvements in disaster avoidance and business continuity, thanks to offsite data backup that enables users to access applications and data with minimal or no disruption to business operations.

    Although the cloud is delivering on its promise to improve productivity, customer service, flexibility, scalability and disaster preparedness, some midmarket executives are struggling to balance these benefits with the risks of cloud computing. These risks include security, trusting a third party with sensitive data, application performance and reliability, and regulatory compliance. In fact, nearly 40 percent of midmarket IT executives haven’t deployed cloud-based services due to concerns about data privacy and security, according to the Deloitte survey.

    Before implementing a cloud computing solution, there are certain factors that should be considered in order to maximize the benefits and minimize risk:

    • Identify how the cloud will support and improve upon your organization’s processes and goals.
    • Determine which specific departments and job functions would benefit the most from cloud computing.
    • Assess your existing IT infrastructure to determine how complicated a cloud deployment might be.
    • Determine which applications can best take advantage of cloud services — for example, those that require intermittent bursting such as e-commerce or marketing campaigns.
    • Evaluate whether the cloud might provide the resources and scale to support modern data platforms such as Hadoop or MongoDB.
    • Think about how quickly your organization is changing and expanding, and how the cloud can facilitate this evolution.

    Most organizations don’t have the in-house expertise or resources to adequately address each of these considerations. This can lead to cost overruns, delays and less-than-optimal performance. Let the Sigma Professional Services team conduct a cloud-readiness assessment, develop a strategic deployment plan, and leverage partnerships with respected cloud providers such as Rackspace and SunGard to help you take full advantage of cloud computing.

  • Despite the efficiencies gained through technological advances and hardware consolidation in recent years, research from IDC shows that the old 80-20 rule still applies to most IT departments: 76.8 percent of time and resources are devoted to maintaining the environment, while the remaining 23.2 percent are spent on strategic initiatives that deliver actual business value.

    How is this possible? Various components of the IT infrastructure are still managed in technological and organizational silos. Silos drive up costs because provisioning, deploying and updating new solutions require more time and personnel. Added layers of complexity and compatibility issues make the environment less flexible and more difficult to operate and scale. Virtual server sprawl creates performance issues and administrative headaches, which often lead to unnecessary upgrades and overprovisioning.

    Converged infrastructure simplifies the IT environment by delivering compute, networking, storage access and virtualization resources in one preconfigured, pretested solution. The reduced complexity of a converged infrastructure that shares the same pool of resources brings a number of benefits:

    • Simplified, central management and maintenance. Administrators control a converged infrastructure through a single management console. Training requirements are reduced, and IT has a single point of contact for support, even if the solution includes components from more than one vendor.
    • Lower costs. More efficient cabling, lower power and cooling requirements, fewer maintenance contracts, higher resource utilization, and a smaller footprint with fewer moving parts make a converged infrastructure less expensive to operate.
    • Faster deployments. A converged infrastructure is typically up and running in days as opposed to months. Manual configurations and errors that commonly cause delays are replaced with a fully automated, orchestrated solution.
    • Less risk. Because a converged infrastructure is preconfigured and pretested for various workloads, there is a much lower risk compared to building an IT infrastructure from the ground up. Performance is much more predictable.
    • Easier scalability. Provisioning equipment, applications and services is simpler and faster, and changes can be made seamlessly without disrupting the rest of the IT environment.
    • Improved business agility. Because time-draining silos are eliminated and orchestration software makes it easy to add new solutions, IT can more quickly adapt to evolving business priorities and market conditions.

    It’s not uncommon for organizations to take a best-of-breed approach to IT, choosing the best hardware, software and services from various vendors with the goal of creating an IT all-star team. While this approach will arm organizations with world-class players, managing those players and getting them to play nicely together is a major challenge. With a converged infrastructure, someone else has already assembled a cohesive team for you – a team that’s ready to hit the field from day one.

    To determine the best path forward, you need to understand what types of changes converged infrastructure will bring to your organization – technically, operationally and culturally. How will it affect your existing IT environments? How will roles change? Will it be difficult to get your team to embrace these changes? Is your current infrastructure aligned with your business processes and goals, or is it time for a change? Sigma Solutions can help you answer these questions so you can take advantage of a more efficient, easy-to-manage IT environment.

  • According to a Frost & Sullivan survey of midmarket companies, keeping up with new technology is the biggest IT challenge organizations are facing today. This is an obstacle that goes beyond staying abreast of the latest innovations. Organizations of all sizes are struggling to choose and implement the kinds of IT solutions that create true competitive advantage.

    That’s because many IT organizations are stuck in the old 80-20 rut. As much as 80 percent of IT resources continue to be dedicated to managing and maintaining existing and often outdated technology, while only 20 percent are spent on strategic initiatives that boost productivity and revenue. Instead of spurring growth, technology is causing many organizations to remain stagnant.

    More and more organizations are adopting a managed services model, in which an IT service provider remotely manages the organization’s IT processes. Managed services include network monitoring, data backup management, server maintenance, security and patch management, tech support and other services.

    A managed services provider (MSP) can help organizations optimize their IT environments. Benefits of this model include:

    • More efficient operations. An MSP can help you streamline your IT processes, dramatically reduce maintenance and support costs, and remove layers of complexity in order to deliver critical services more quickly and effectively.
    • Better use of in-house IT resources. Most organizations have limited IT budgets and personnel. Utilizing managed services allows these organizations to dedicate IT resources to strategic growth initiatives and outsource time-consuming, day-to-day tasks.
    • Greater predictability. MSPs use automated tools that minimize human error. At the same time, best-in-class MSPs employ best practices that ensure critical maintenance is performed regularly.
    • Improved network resilience. Managed services can help detect and remediate problems before they cause downtime. Sophisticated security tools are closely monitored to thwart cyber attacks, while remote backup and disaster recovery solutions can ensure access to mission-critical applications and services should disaster strike.
    • Fewer compliance headaches. Industry regulations are constantly evolving and many requirements are becoming more stringent. A recent study by Six Degrees Group found that more than half of IT professionals would prefer to outsource data compliance to an MSP. An MSP can help ensure regulatory compliance by monitoring for events that could result in downtime or data loss.
    • Faster implementation of new tools and services. An MSP typically has a level of expertise that few in-house IT departments can match, as well as the resources to evaluate the latest IT solutions. The MSP can serve as a “virtual CIO,” helping you to deploy new solutions quickly and ensure they’re aligned with your organization’s business processes and goals.
    • Data for improved budgeting and decision-making. Based upon data gathered by remote monitoring and reporting tools, organizations can fine-tune their budgets and be prepared to ramp up or scale back services as business needs evolve.

    To take full advantage of the benefits of managed services, IT should determine which tasks can be automated using remote tools. These tasks are ideal candidates for outsourcing to an MSP. Look for a provider who has extensive experience and offers a blend of managed and professional services that can be strategically combined to meet your specific business needs.

    Sigma OneSource is an enterprise-class managed services solution based upon industry best practices and our proven methodology. Let Sigma help you assess the state of your IT operations and determine how a managed services model can deliver the most value to your organization.

  • How relevant is the desktop computer to your personal computing experience?

    For many of us, the PC remains a primary computing tool even as we add more mobile devices to our arsenals. That stems in many respects from the traditional end-user computing architecture, in which client devices must be capable of running rich applications and connecting to the network for data and services. That architecture has not translated well to the mobile computing model due to the memory, processing and power limitations of the device.

    The cloud changes the mobile computing paradigm by shifting data processing and storage off of the device. With the cloud you gain the ability to handle Big Data, Web 2.0 technologies and other rich applications on a mobile platform — what some experts are calling the Third Platform.

    International Data Corp. analysts coined the term “Third Platform” to describe the next major era in computing, after the mainframe and PC platforms that preceded it. The Third Platform represents a symbiosis of mobile computing, cloud services, Big Data analytics and social networking, and IDC describes it as “the industry’s emerging platform for growth and innovation.”

    The IEEE Computer Society refers to the trend as the “Mobile Cloud” while Gartner analysts prefer the term “Nexus of Forces.” But all seem to agree that this new model will dominate the technology landscape in the foreseeable future and drive technology investments in 2014 and beyond.

    The mobile device serves as the foundation of the Third Platform as smartphones and tablets continue to displace traditional PCs and laptops. The cloud forms the next layer, with spending on cloud technologies and services expected to increase 25 percent in 2014. As these forces combine, workers are no longer anchored to the PC as their primary computing hub, and organizations are able to streamline workflows through next-generation applications and services delivered to mobile devices.

    The increasingly complex needs of mobile users are expected to drive greater reliance on data center infrastructure, and IT must be prepared to support those needs. Some of the trends we are seeing:

    • Social networking. Social technologies are driving not only customer engagement and marketing strategies but also product and service development processes. As a result, organizations are starting to integrate social tools with business applications and incorporate the identity management systems of social networks into their user authentication processes.
    • The personal cloud. Personal cloud solutions enable IT to provide seamless, secure access to business apps and data to any device, including PCs, with identity-based provisioning and policy-based control.
    • Big Data. Organizations are struggling to make sense of the huge volumes of unstructured data they generate. The cloud provides the scalable infrastructure needed to transform Big Data into actionable insights that users can tap on demand.

    Perhaps more than anything else, the Third Platform will shift IT’s focus from devices to services. Although devices are still needed to access applications and data, the specifics of devices will become less relevant.

    What does all this mean to your business? The Third Platform is expected to become a disruptive force in almost every industry. That’s why Sigma has developed a comprehensive suite of services and solutions focused on end-user computing and the cloud. Sigma is here to help you embrace the Third Platform and develop a strategy that will help create competitive advantage in 2014 and beyond.

  • One of the strongest areas of growth in the IT industry is integrated infrastructure — International Data Corp (IDC) reports that integrated infrastructure sales rose 80.3 percent year over year during the second quarter of 2013. IDC defines integrated infrastructure as “pre-integrated, vendor-certified systems containing server hardware, disk storage systems, networking equipment, and basic element/systems management software.” Missing from that definition is the tremendous value these solutions bring to customers.

    Traditionally, IT environments have been assembled by purchasing, sizing, configuring and integrating servers, storage and network gear. The whole process takes weeks or months, and involves multiple IT teams with expertise in the various components. As business demand for IT services has accelerated, IT organizations are finding that they are unable to keep up using this build-as-you-grow model.

    Virtualization has further altered the dynamic. IT can roll out servers and applications in a matter of days, but the rest of the infrastructure often lags behind, even with automated provisioning. One IT manager put it this way: “We tried organizationally to automate these things, but if you have five different processes you still have five handoffs.”

    Integrated infrastructure solutions relieve these bottlenecks by bringing together a technology stack in which all components have been configured and sized to support various workloads. What’s more, the technology stacks have been tested to ensure interoperability and certified to deliver optimal performance, giving IT the confidence to roll out new services in hours rather than days.

    Integrated infrastructure solutions are increasingly popular among organizations seeking to deploy new applications rapidly and reduce capital and operational expenses by maximizing utilization, simplifying management and minimizing downtime. A Zenoss survey conducted during the first quarter of 2013 found that 30 percent of respondents are using integrated infrastructure, and more than half are either considering or planning to adopt it.

    Last year, Enterprise Strategy Group asked IT managers which infrastructure deployment model they were using and which model they would prefer to use. Nearly half (46 percent) said they were using the do-it-yourself model, buying individual components and building the environment from the ground up. However, 36 percent said that they would prefer to use an integrated infrastructure solution, compared to just 28 percent who would prefer the do-it-yourself model. If that survey were taken today, it is likely that a greater percentage of respondents would prefer integrated infrastructure solutions.

    Integrated infrastructure solutions deliver key advantages:

    • Faster, more efficient deployment. IT departments don’t have to devote time and resources to procuring and configuring each data center component.
    • Improved performance. Integrated infrastructure solutions are pretested to optimize performance and ensure the seamless sharing of intelligence between hardware components.
    • Simplified management. Management is centralized, policy-driven and automated to reduce the burden on IT.
    • Better service and support. The entire technology stack is certified by all vendors, who guarantee to work together to support the customer. There is no finger-pointing if something goes wrong.
    • Lower total cost of ownership (TCO). All of the above-mentioned advantages help reduce capital and operational costs. Organizations have reported a full return on investment in just eight months.

    These key advantages are driving the fast pace of adoption of integrated infrastructure solutions, especially among those organizations that are increasing their use of virtualized and cloud-based services. IT executives view integrated infrastructure as the foundation for next-generation data centers and a key enabler of new business initiatives.

    The right partner is critical to success, helping to ease the transition, assist in the certification process and enable collaboration among the vendor, customer and deployment team. Sigma delivers a suite of services across the full lifecycle of the converged stack — consulting, implementation and operational support. Sigma has the right mix of experience, expertise and vendor relationships to help you gain maximum value from integrated infrastructure solutions.

  • Corporate America has reached a critical stage in the shift to digital technology. Enterprises are quickly moving to cloud-based networks, embracing mobile platforms and mining big data with an eye toward increased efficiency, flexibility, productivity and customer satisfaction.

    Unfortunately, the typical enterprise is realizing just 43 percent of this technology’s business potential, according to a global survey of CIOs by Gartner, Inc.’s Executive Programs. One major obstacle is the supply of skilled IT labor, which hasn’t kept up with technological innovation and has many companies scrambling to manage, monitor and maintain their IT infrastructure.

    If you don’t have people with the right IT skill sets, any competitive advantage gained by this technology is lost. To avoid falling behind and potentially crippling your business, you can utilize managed services to outsource IT operations, bring people on site with supplemental staff augmentation, or hire new employees. Each option has benefits depending upon your requirements.

    The Case for Managed Services

    • You can focus on your core business. IT may be critical to your company’s growth, but IT operational tasks are probably not among the core business activities that made your company successful. Managed services enable you to out-task IT operational functions so you can focus on strategy and new initiatives.
    • You can control costs and operate more efficiently. With managed services, you pay a predictable monthly budget for the work performed and the results produced. You gain significant cost savings through greater productivity, increased uptime, more efficient operations and reduced personnel costs.
    • You can maintain flexibility and adaptability. Managed services allow you to ramp up and scale back your IT operations as needed to meet changing business requirements.
    • You can avoid the recruiting and training process — and gain 24×7 operations without staffing multiple shifts. Forget about job listings, interviews, job training or learning curves. With managed services, your outsourced IT team already has the skills needed to hit the ground running. And they’re available 24 hours a day, 365 days a year to meet mission-critical requirements.

    The Case for Staff Augmentation

    • You can maintain complete control of your resources. You can closely monitor everyone on site and make sure each individual is aligned with your business processes and goals.
    • You can leverage new and existing resources. Staff augmentation enables you to bring new skills and knowledge to your office that can be transferred to your employees to improve their job performance and productivity.
    • You can promote and enhance collaboration. Don’t underestimate the power of a close-knit team that can instantly share ideas face-to-face. This can improve morale while reducing confusion and corrections.

    The Case for Working with a Solution Provider that Offers Both

    Managed services may be a better fit for certain operational requirements, while staff augmentation is ideal for other types of projects. Sometimes your IT needs and solutions will overlap. Wouldn’t you rather get the best of both worlds from one company?

    A solution provider that offers both managed services and staff augmentation can help you keep your IT operations and initiatives moving forward efficiently and effectively. Instead of constantly trying to fill skills gaps, you’ll be able to take full advantage of new technology to advance your business.

    This is a business decision that doesn’t require analysis of tons of data. It’s just common sense.

    What challenges do you face because of the IT skills gap, and what are you doing to overcome them?

  • Here’s a sobering statistic. Gartner analysts estimate that, through 2015, just 10 percent of IT organizations will have the operational and infrastructure agility to respond to the speed of change required by the business.

    On the bright side, that represents a significant increase in IT agility over the next two years. Less than 2 percent of IT organizations are sufficiently responsive today.

    Clearly, IT is not keeping pace with the business despite the growing use of virtualization, scale-out storage and other technologies that facilitate IT agility. The authors of the Gartner report explain that the problem is operational rather than technological. Pressured to ensure high availability and data integrity, IT has become risk averse and reluctant to change its processes and internal controls. Yet change it must in order to meet business demands.

    Obviously, change can’t be implemented willy-nilly. Gartner recommends that IT organizations review their change management processes from both a business and IT perspective in order to better balance risk aversion against business velocity. Only then can they ensure that the right people, processes and technologies are in place.

    Let’s skip the “people” component for a moment and focus on the other two. Many IT organizations continue to rely on manual or semi-automated processes that fail to capitalize on the efficiencies of today’s data center technologies. In many cases, IT also lacks the management tools needed to optimize operations — or, worse, has a growing array of management point products without an overarching operational structure or sufficient staff to watch all the little needles and dials.

    With constrained budgets, skills gaps and increasingly stringent SLAs, it’s little wonder that IT is loath to abandon what has worked in the past. Few IT organizations have the resources and expertise to support current workloads and data volumes — much less effect real operational change.

    This is where IT-Operations-as-a-Service can help. IT-Operations-as-a-Service goes beyond commodity managed services programs to help IT shops create an optimized operational environment. We’re not talking about ensuring a “green light” — we’re talking about an operational model that can scale rapidly to meet changing business requirements. And, oh by the way, it can deliver significant cost savings through improved efficiency and lower personnel costs.

    IT-Operations-as-a-Service is able to achieve these benefits through automated processes and procedures and a secure and auditable management platform that supports proactive maintenance, task management and remote support. The service provider should deliver 24×7 support coverage, problem ownership and streamlined escalation in a flexible model that meets the customer’s SLAs and business requirements.

    Of course, if process and automation were the only things necessary to optimize IT operations, many more organizations would be prepared to handle the accelerated pace of business change. The IT-Operations-as-a-Service provider should also have deep experience deploying and supporting large, heterogeneous networks and expertise spanning all IT operational functions and key enabling technologies.

    In a future post we will examine how both IT-Operations-as-a-Service and supplemental staffing can fill skills gaps and how to choose the best solution for a particular function. In the meantime, we would be interested in hearing about how your IT organization is managing the pace of business change.

  • Virtualization has transformed the data center by breaking the relationship between applications and the IT systems on which they run. However, the benefits of virtualization often are offset by increased storage complexity and expense.

    Unified storage provides a solution to this quandary by allowing organizations to consolidate and virtualize storage across protocols, environments and mixed storage platforms. Combinations of block storage (Fibre Channel or iSCSI) and file storage (NAS systems with CIFS or NFS) can be managed via a common set of features such as snapshots, thin provisioning, tiered provisioning, replication, synchronous mirroring and data migration — all from a single user interface. This shift toward a shared infrastructure enables organizations to achieve storage utilization rates of 85 percent or more, compared to the sub-50-percent rates in standalone storage silos.

    Unified storage remains an evolving technology, however. Typically, these systems leverage virtualization to create deeper integration of file- and block-based storage. New to the mix is the addition of object storage.

    In a file-based system, a data file is accessed by locating the specific address within the file system hierarchy. With object storage, a unique identifier plus the file’s metadata is used to locate the file. Because objects are retrieved using their unique identifiers, there’s no need to know a directory path or even the object’s location. This location transparency makes object storage ideal for managing and archiving large quantities of static information in the cloud.

    In fact, object storage is geared toward the cloud — it uses the HTTP protocol rather than file or block storage standards. Applications access data using open standards such as SOAP (Simple Object Access Protocol) and REST (Representational State Transfer), which are designed to look for the unique identifiers.
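
    To make the access model concrete, here is a minimal sketch in Python of storing and retrieving an object by its key over a REST-style interface. The endpoint, bucket name and metadata headers are hypothetical stand-ins rather than any specific provider’s API, and authentication is omitted for brevity.

    # Minimal sketch of REST-style object storage access (Python standard library only).
    # The endpoint is hypothetical; authentication and the exact metadata header
    # convention vary by provider and are simplified here.
    import urllib.request

    ENDPOINT = "https://storage.example.com/archive-bucket"

    def put_object(key, data, metadata):
        # The object is addressed by a unique key plus its metadata,
        # not by a path within a file system hierarchy.
        headers = {f"x-meta-{name}": value for name, value in metadata.items()}
        req = urllib.request.Request(f"{ENDPOINT}/{key}", data=data,
                                     method="PUT", headers=headers)
        with urllib.request.urlopen(req) as resp:
            return resp.status

    def get_object(key):
        # Retrieval needs only the key; the object's physical location is transparent.
        with urllib.request.urlopen(f"{ENDPOINT}/{key}") as resp:
            return resp.read()

    put_object("2013/q4/board-meeting.mp4", b"<video bytes>",
               {"department": "finance", "retention": "7y"})
    video = get_object("2013/q4/board-meeting.mp4")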

    Object storage is particularly well suited to unstructured data such as videos, images and sound files that don’t necessarily need hierarchical indexing. That’s why sites such as Facebook use object storage to handle massive volumes of multimedia files, and some enterprises are using it for archiving unstructured data, email and virtual machine images.

    Interest in object storage is increasing due to the explosion in unstructured data growth driven by regulatory compliance requirements and data analytics. In addition to distributed access, object storage gives you the ability to store millions of objects without running up against the restrictions associated with file-based storage systems. Object storage also uses a flat address space, reducing complexity by eliminating the need to manage logical unit numbers (LUNs). And it makes sense to build a storage infrastructure based upon the public cloud model if you’re implementing a private cloud.

    Object storage is not a replacement for file- and block-based storage. It is not well-suited to data that changes frequently, and the HTTP protocol limits throughput. The fixed attributes of file storage are needed to ensure consistency in shared-file applications, and the performance of block storage is required for high-performance OLTP applications.

    However, organizations grappling with growing volumes of unstructured data should consider adding object storage to the mix. Sigma can help you evaluate and deploy an intelligent, object-based storage solution that helps combat storage sprawl and increase efficiency.

  • By John Flores,
    VP of Marketing and Business Development
    Sigma Solutions

    IT-as-a-Service is the new nirvana, an agile IT infrastructure that enables rapid response to changing business conditions and needs. Some people refer to this agile infrastructure as the private cloud. Whatever you want to call it, it represents a transformation of the traditional data center architecture.

    Traditionally, IT infrastructure was built vertically to support individual applications. That monolithic structure made it difficult to scale the environment to meet increased storage or performance demands. The new agile IT infrastructure is built out horizontally, with applications spread across pools of virtualized compute, storage and networking resources. Because those pools can readily scale in response to changing requirements, this new architecture is much more flexible and efficient.

    There are three ways to go about building an agile environment. One option is to go out and buy best-of-breed components and construct it from the ground up. The beauty of that strategy is that it’s extremely flexible and can be finely tuned to existing infrastructure and specific business requirements. The downside is that there is a good deal of complexity and effort involved. That’s why customers call Sigma — we have proven experience helping customers build private clouds.

    At the other end of the spectrum is a converged infrastructure solution such as Vblock from VCE. Vblocks are validated “stacks” that integrate best-in-class virtualization, networking, compute, storage, security and management technologies. They offer a more streamlined approach to creating private clouds, and Sigma has the certifications and expertise to successfully integrate Vblocks into the IT environment. But while Vblocks deliver pervasive virtualization and scale, a pre-engineered, pre-integrated solution may be somewhat limiting in certain environments.

    A third option is to use a reference architecture — a tested and validated design based upon best-of-breed technologies. NetApp’s FlexPod solution, for example, is a predesigned base configuration comprising the Cisco Unified Computing System (UCS), Cisco Nexus data center switches and NetApp FAS storage. The reference architecture is modular or “pod-like,” such that the configuration of each customer’s FlexPod may vary. Nevertheless, a FlexPod unit can easily be scaled up by adding resources or scaled out by adding FlexPods. It creates an agile computing environment that can meet ever-increasing performance demands and support “big data” workloads.

    A Sigma customer recently experienced the benefits of the FlexPod approach. The customer had already implemented NetApp storage, Cisco UCS and a Nexus fabric, and opted to leverage that infrastructure to create a FlexPod. Sigma engineers helped the customer tune the configuration in order to validate the design. It enabled the customer to rapidly expand a virtual desktop initiative with the confidence that the infrastructure could support the workload.

    EMC’s VSPEX is another reference architecture. With VSPEX, customers can combine their choice of industry-leading compute, networking and virtualization technologies in a proven infrastructure validated by EMC and built on highly flexible EMC storage and backup infrastructure. As a result, VSPEX Proven Infrastructures significantly reduce the planning, sizing and configuration burdens associated with private cloud deployments.

    Of course, Sigma has been providing these types of solutions for a number of years now. Sigma’s broad and deep experience across the data center enables us to create robust yet highly flexible environments based upon best-of-breed technologies. Whatever solution best meets the customer’s needs, Sigma has the knowledge and experience to transform the IT infrastructure and achieve the nirvana of the IT-as-a-Service model.

  • By Elias Khnaser
    CTO, Sigma Solutions

    In part one, I offered a high-level overview of a suggested end-user computing strategy. Let’s break down the topics, starting with the desktop strategy.

    Desktop Strategy
    While we may be in the post-PC era, that doesn’t mean physical desktops and laptops are going to disappear. We need to continue to fine-tune and deploy desktop management tools like Microsoft SCCM and others. On the other hand, ignoring desktop virtualization and VDI is no longer acceptable either, and continuing the rhetoric and debate about CAPEX vs. OPEX and the exaggerated costs of VDI is just a bunch of “malarkey” (sorry, I had to find a use for this word).

    A well-planned and designed desktop virtualization infrastructure can be very cost-effective and cheaper than a physical implementation. It is also about time we positioned the benefits of desktop virtualization from a business perspective: BC/DR, flexibility and more. We must look beyond how much it is going to cost and consider what we gain. Anyone can lie with numbers and make them look the way you want, so let’s agree to just get past the TCO of desktop virtualization — it has a place and it is an integral part of the strategy.

    MDM/MAM/MIM
    Mobile Device Management, Mobile Application Management and Mobile Information Management — they’re all new terms, all colorful terms. With the mobile device explosion, we need to evolve our mindset from one that has traditionally been about controlling the device to one that governs the device. Better yet, we should govern enterprise resources on these devices. MDM will aid in enforcing device passwords, remote selective wipe of the enterprise resources on the device, encryption, reporting, etc.

    MAM is about mobile applications: sandboxing and encapsulating mobile applications so that we can apply policies against them. Without sandboxing or application wrapping, it will be very difficult for enterprises to control what applications can and cannot do. This is especially apparent with native e-mail clients. Without sandboxing the e-mail client, mobile applications installed on the device could gain access to corporate contacts and information that otherwise would not be allowed. Native e-mail clients are also so embedded into the mobile OS that it is difficult to sandbox them. That’s why vendors such as Citrix and VMware now provide their own version of a sandboxed e-mail client as a complementary alternative.

    MAM can also serve as a consolidated application store for the enterprise where Windows, SaaS, mobile and other applications can be consumed. This is, again, a technology where there might be overlap between MDM vendors and desktop virtualization vendors such as Citrix and VMware. As you make your technology selection, choose a MAM solution that integrates best with your desktop strategy and technology partner selection.

    Mobile Information Management, also known as Mobile Data Management, essentially provides Dropbox-like functionality for the enterprise. The idea here is to enforce policy-driven security that allows or denies file syncing to certain devices in certain locations. More granularly, it can allow or disallow certain file types on certain devices, and so on.
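
    As a rough illustration of how such a policy-driven check might work, here is a minimal sketch in Python. The device types, locations and file-type rules are hypothetical examples, and the snippet is not based on any vendor’s actual API; a real MIM product would layer encryption, auditing and remote wipe on top of this kind of allow/deny decision.

    # Hypothetical sketch of MIM-style sync policy evaluation; not a real vendor API.
    from dataclasses import dataclass

    @dataclass
    class SyncRequest:
        device_type: str     # e.g. "corporate-laptop" or "byod-tablet"
        location: str        # e.g. "office", "home", "unknown"
        file_extension: str  # e.g. ".docx", ".pdf"

    # Example policy: which file types may sync to which devices, and from where.
    POLICY = {
        "corporate-laptop": {"locations": {"office", "home"},
                             "extensions": {".docx", ".xlsx", ".pdf"}},
        "byod-tablet":      {"locations": {"office"},
                             "extensions": {".pdf"}},
    }

    def allow_sync(req: SyncRequest) -> bool:
        rules = POLICY.get(req.device_type)
        if rules is None:
            return False  # unknown devices are denied by default
        return (req.location in rules["locations"]
                and req.file_extension in rules["extensions"])

    print(allow_sync(SyncRequest("byod-tablet", "office", ".pdf")))  # True
    print(allow_sync(SyncRequest("byod-tablet", "home", ".docx")))   # False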

    Social Enterprise / Collaboration
    Do you really enjoy sending one-word e-mails, e-mails that say “Thank you” or “Yes”? Do you enjoy searching through thousands of e-mails to locate the conversation you were having, or to find a file attachment? If you are like me, you probably despise e-mail — I truly hate it. In my consulting world, when working on a customer’s statement of work, we start versioning the SOW and sending it back and forth. There has got to be an easier way. What if we had a Facebook-like enterprise platform where we could collaborate with colleagues? Better yet, what if this social enterprise could be linked to our MIM solution so that we can drag files and collaborate on them while they remain in a centralized, secure location?

    Of course, social platforms still need to mature somewhat for the enterprise, and you have to be able to answer questions such as:

    • What level of use of social networking will you allow?
    • Are any social networking services more enterprise-friendly than others?
    • How are they used for work purposes? (crucial question)
    • How do you see social enterprise changing communication and collaboration behavior at your company?

    I will take it one step further and say that I believe social enterprise platforms such as Socialcast and Podio have the potential to become the next desktop; I have blogged about them here several times.

    Wireless
    Every customer tells me they have a wireless infrastructure, and I recognize that wireless is part of the DNA of every enterprise. What many dismiss or disregard, however, is that these wireless infrastructures were not built to handle the number of devices that are or will be connecting to them. More important still, the types of services delivered over these wireless infrastructures are significantly different.

    Remember, in an end-user computing strategy, you have to take into account remoting protocols like PCoIP, HDX, RDP and others. You also have to account for new and updated technologies that could improve other services. So, please don’t ignore the wireless infrastructure.

    We are also looking for a secure and scalable infrastructure with pervasive coverage to detect and mitigate sources of interference. A wireless infrastructure capable of location tracking will tie in very nicely with your MDM tools to enable or disable certain functionality depending on your geographic location.

    Security
    There is no way you are thinking about an end-user computing strategy, and BYOD in particular, without taking into account security generally and network access control in particular. You should be investigating and planning to control wired and wireless access with dynamic, differentiated access policies, enforce context-based security, and provide self-service access and guest lifecycle management via agent or agentless approaches.

    Now it’s your turn. Do you agree that an end-user computing strategy is needed? And if so, how can we refine and fine-tune the strategy I laid out here? Comment away!

  • By Elias Khnaser
    CTO, Sigma Solutions

    End-user computing has expanded dramatically and grown more complex. In this two-part series, we will explore strategies that enterprises could use to address the current issues, from consumerization and BYOD to desktop virtualization and physical desktop management.

    It used to be fairly simple and straightforward: End-users got either a desktop or a laptop, those who needed a bit more accessibility got a BlackBerry for mobile email, and that was it. Sophisticated enterprises managed those desktops with Microsoft SCCM, Symantec Altiris, LANdesk or similar technologies.

    Those days are gone and the situation has radically changed: the needs and requirements of end-users have evolved to the point that they have, on average, two or three devices — a PC plus a smartphone and/or tablet.

    Access to resources has also changed. We used to just load everything on the laptop, but now end-users want and need selective access to resources on their preferred device from anywhere at any time over any connection.

    That means it’s time to rethink the end-user computing strategy.

    For many years, IT treated the end-user space as a second-class citizen, with no real IT talent devoted to it or any serious planning or strategy. The attitude was to just get it done no matter how sloppy the method. Most of our time and effort was focused on the data center, the crown jewel of every IT engineer’s resume. We wanted to go through the ranks, through the help desk and get to the data center — where real computing happens.

    Well, today, enterprises are demanding that the same level of seriousness we dedicated to the data center now gets focused on the end-user computing side.

    Where do we start? Let’s begin by identifying the components of this new strategy:

    • Desktop Strategy — this means a strategy for physical and virtual desktops and applications
    • MDM/MAM/MIM — necessary to govern the mobile devices, applications and data
    • Collaboration — a modern way of collaborating between end-users that goes beyond the traditional tools to reach the social enterprise
    • Wireless Infrastructure — a robust, dynamic and scalable wireless infrastructure to support the influx of devices and services
    • Security — at the heart of any strategy is security, and end-user computing security in the age of BYOD is crucial

    Now, the challenge is weaving all these technologies together while avoiding overlap, as some of the vendors in question provide similar capabilities. For instance, most MDM vendors now offer some sort of Dropbox-like functionality, but so do desktop virtualization vendors such as VMware and Citrix.

    Next time, we’ll break down these components and discuss the strategy in more detail. In the meantime, please share your feedback in the comments section, especially if I have missed any high-level topics.

  • By Brad Moss
    Senior Consulting Engineer

    Companies such as Vyatta have been delivering software-defined networking (SDN) for years, and it works great. The issues come in the form of performance hits, depending on which technologies are being implemented in software.

    A prime example is VPN. Any VPN solution worthy of the name “concentrator” has purpose-built hardware chips that process encryption and decryption faster than a general-purpose CPU can, especially when that CPU is also multitasking other threads from servers and the like. The real issue is connecting disparate systems together — that still requires physical cabling, which will keep network hardware around for a long time.

    NX-OS is a VM running on a Nexus chassis, and the same is true of the 62xx fabric interconnects, which actually run three VMs: management, the Web GUI cluster and the actual FI software. So I’m not sure it’s fair to say that the network vendors are not moving toward SDN. They just have not approved off-the-shelf hardware to run it.

    As for ideas such as OpenFlow, a capitalistic market will not allow a completely open source product to win over the masses; very few open source projects ever make it into the mainstream. As long as people demand more from CPU, memory and the latency between physical servers, there will have to be higher-grade silicon in the hardware, not RadioShack “build it yourself” network gear, to forward that traffic.

    He who owns the intellectual property is king. Even if we find a way to make the protocol widespread, there will be something for sale to support it (think Red Hat).

    Yeah, I think networking is overly complicated in some areas and could be simplified to the point where one person could manage the entire infrastructure from a central console, a single pane of glass. UCS is a prime example: the initial setup takes three to five days, but once it is installed and configured, hundreds of servers can be rolled out with the click of a mouse in an easy-to-use front end. I can see networking going the same way. Oh, and as with SIP, every vendor will have its own flavor of the “standard” that will not play nicely with the others.

    So in the end, IBM virtualized everything in the 1960s and ’70s, and then along came a new marketing message: “We need to put computing resources in users’ hands and ‘decentralize.’” It is all about marketing and selling product. The era we are in now is about moving the personal computer closer to the data sources and giving users ultra-portable, high-powered devices to access that personal computer remotely. So now everyone wins in this deal except the PC manufacturers. Guess they are the odd man out.

    Update:

    I have researched SDN and OpenStack a bit more since writing the first half of this post. It makes a lot of sense and takes an out-of-the-box look at networking. Network engineers are the masters of complexity (http://youtu.be/CW7lT6oUWjI), and that is all too true. For some this will be a problem, just as VoIP was for the old telecom guys. The stagnant network engineer who has been in the same job for 15 to 20 years and knows every little piece of hardware in his network (the master of complexity) is going to be slow to adopt SDN architectures.

    Once networks are simplified and become essentially controller-based, similar to how wireless networks have operated for the last few years, those complexities go away. Network engineers who do not adopt the new technology will find themselves out of a job.

    I have been working in data center and enterprise-class networks for more than 13 years. My goal in every situation is to make what I am doing today irrelevant in the future. That requires us to keep learning and adapting to new trends rather than going stale or pushing back on the technology.

    Interesting times are upon us in networking. This is really the first serious effort to change how networks are built since Ethernet and IPv4 went mainstream. IPv6 was ratified in 1998, and the government missed its deadline in the last month or so to move to the “new” addressing scheme. I have to give a shout-out to all the people around me who see my potential and urge me to move into new areas. That’s how I became a UCS deployment engineer, and not just because it is a Cisco product. :-)

  • By John Flores,
    VP of Marketing and Business Development
    Sigma Solutions

    “Big data” is one of the biggest buzzwords in the IT industry today, a term used to describe the massive amount of structured and unstructured data produced by a new generation of systems and applications. Organizations are seeking to tap this data to uncover new insights and make more-informed business decisions. In many cases, however, organizations are finding that they have to resolve big storage problems before they can even begin to consider the potential of big data.

    We’re talking about datasets so large that they transcend the ability of typical database software tools to capture, store, manage and analyze. Although the definition is necessarily subjective, most analysts use the term in reference to petabytes, exabytes or potentially even zettabytes of data.

    This clearly puts a strain on data storage infrastructures. The traditional “scale-up” storage architecture suggests that the sky is the limit. In reality, however, the overall volume of data has become so high that it exceeds the capacity of traditional storage systems. In order to accommodate big data storage volumes, organizations end up deploying tens or even hundreds of storage silos, most of which are underutilized. This storage sprawl increases capital outlays and power and cooling costs, and causes severe management headaches.

    Performance bottlenecks are another problem. Traditional storage systems just don’t have enough horsepower to complete big data operations efficiently. In order to handle all the I/O requests, organizations tend to add more spindles to the environment and reduce the amount of data stored on each disk. This again leads to a bloated yet underutilized storage infrastructure.
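
    The arithmetic behind that sprawl is easy to sketch. In the example below, the dataset size, per-disk capacity and per-disk IOPS are assumed figures chosen only to illustrate how sizing for performance rather than capacity inflates the disk count and drives utilization down.

        import math

        # Assumed figures for illustration; substitute your own measurements.
        dataset_tb = 500          # working dataset size
        disk_tb = 4               # usable capacity per drive
        required_iops = 200_000   # workload demand
        iops_per_disk = 150       # rough figure for a 10K-rpm class drive

        disks_for_capacity = math.ceil(dataset_tb / disk_tb)
        disks_for_iops = math.ceil(required_iops / iops_per_disk)
        utilization = dataset_tb / (disks_for_iops * disk_tb)

        print(f"Disks needed for capacity alone: {disks_for_capacity}")
        print(f"Disks needed to satisfy IOPS:    {disks_for_iops}")
        print(f"Capacity utilization when sized for IOPS: {utilization:.0%}")

    Sized for performance, the same dataset occupies roughly ten times the spindles it needs for capacity, which is exactly the bloated yet underutilized infrastructure described above.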

    Big data demands a rethinking of the storage infrastructure. One solution that’s gaining traction is EMC Isilon scale-out storage. An Isilon IQ system consists of industry-standard hardware components that function as nodes connected via a high-speed InfiniBand interconnect. OneFS, a next-generation storage operating system, serves as the intelligence behind the Isilon IQ storage platform. Increasing capacity, performance and throughput is as simple as adding more nodes to the cluster — OneFS automatically redistributes data evenly across all nodes. The result is a single file system that can scale out on demand, enabling one person to manage one petabyte as easily as 100 terabytes.
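
    To illustrate the scale-out idea in the abstract (this is not a model of OneFS internals), the sketch below assumes each node contributes a fixed amount of capacity and throughput and that data is rebalanced evenly as nodes are added; the per-node figures are hypothetical.

        def cluster_profile(nodes, tb_per_node=100, mbps_per_node=1000, data_tb=250):
            """Aggregate capacity/throughput and the even data share per node."""
            capacity_tb = nodes * tb_per_node
            throughput_mbps = nodes * mbps_per_node
            data_per_node_tb = data_tb / nodes   # even redistribution across nodes
            return capacity_tb, throughput_mbps, data_per_node_tb

        for n in (3, 6, 12):
            cap, tput, per_node = cluster_profile(n)
            print(f"{n:2d} nodes: {cap:5d} TB raw, {tput:6d} MB/s aggregate, "
                  f"{per_node:5.1f} TB of data per node")

    Capacity and aggregate throughput grow together as nodes are added, while the share of data each node holds shrinks, which is what keeps a single file system manageable at petabyte scale.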

    A new breed of scale-up storage solutions can provide the processing power to conquer performance bottlenecks. EMC VMAX and Hitachi Data Systems VSP are high-performance solutions that deliver the raw horsepower needed to handle large datasets.

    These solutions can be used in concert to create a robust storage environment capable of handling big data. De-duplication, tiering, archival and retention policies can also be used to streamline the big data environment.

    Of course, what’s “big data” today will rapidly become the norm as data volumes continue to skyrocket. Traditional storage subsystems will no longer be viable options. Organizations need to start preparing for that inevitable future with a new approach to storage.

    Posted in: Big Data, Storage
  • By Brian Nettles,
    VP of Operations and CIO
    Sigma Solutions

    Almost every CIO who responded to Gartner’s 2012 CIO Agenda survey late last year said that reducing operational costs and increasing IT investments were top priorities. However, many organizations struggle to contain IT operational costs, creating a vicious cycle that precludes needed investments. Gartner Research Vice President Stewart Buchanan explained it this way:

    Organizations that overspend on operational activity have little money left to invest in new projects. Without reinvestment, organizations cannot restructure and optimize their operational spending. This results in rising non-discretionary costs, which in turn result in further underinvestment, lack of competitiveness, failing client service and loss of revenue. This makes future spending even less affordable and even less avoidable.

    Part of the problem stems from a failure to include operational expenditures in project budgets, or from overly optimistic operational cost estimates. But at a more fundamental level, many IT shops find it difficult to manage today’s complex environment — much less prepare to meet tomorrow’s operational needs.

    Staffing is an ongoing challenge. It’s tough to find skilled and certified personnel with the right cultural fit, and then keep them up-to-date with ongoing training. IT managers often find themselves running a 24×7 operation with a 9×5 staff. Worse, operational knowledge typically is held by a few key personnel, putting the organization at risk. And because of personnel constraints, many IT shops lack mature processes for change control, capacity planning and problem management.

    Management tools have largely failed to deliver promised efficiencies. Most monitoring systems spit out raw data with little actionable information. More sophisticated tools are overly complex and often wind up as shelfware. As a result, organizations lack visibility into IT performance and insight as to the true costs of IT operations.

    Fixing IT operations requires the right blend of people, process and technology, but all too often organizations look at these components discretely. Adding contractors just brings in more bodies without driving real change. Outsourcing firms may take a process-driven approach, but generally lack the flexibility needed to support a changing environment. Management tools can enable more proactive operations when implemented correctly, but they increase the IT footprint and total cost of ownership. How many IT managers have lamented that they spend more time managing their support tools than the technology those tools are supposed to support?

    Sigma has developed an IT-Operations-as-a-Service offering that addresses all aspects of the operational environment. We looked at the market and captured the best of the IT outsourcing model — great technical expertise and refined processes — and combined those resources with the technology needed to manage complex environments. We built a relationship-oriented solution from the ground up, with local talent, 24×7 coverage, a cloud-based operations platform and well-defined standard operating procedures, all in a flexible consumption model in which you pay for what you use.

    Almost everyone agrees that IT operations are broken in many organizations. Sigma has gone to market with an IT-Operations-as-a-Service solution designed to fix the problem once and for all. By selectively out-tasking IT operations to Sigma, organizations can begin to achieve their goals of reducing operational costs and increasing IT investments.

    Posted in: IT Operations
  • By Brian Nettles,
    VP of Operations and CIO

    Some interesting buzz came out of VMworld last week. In his keynote address, incoming VMware CEO Pat Gelsinger called today’s data center “a museum.” His point was that data center operations haven’t kept pace with the rate of change in today’s IT environment.

    Some of that has to do with technology but a lot of it involves process. Too many IT shops have too many manual processes that can’t keep up with the speed, flexibility and scale of today’s data center. Organizations are rolling out new IT services faster than ever but don’t have the resources to manage and support them properly. There needs to be greater emphasis on efficiency, automation and best practices.

    There can be a tendency to put a Band-Aid on the problem and hope it gets better on its own. If we bring in a couple of contractors or resident engineers we’ll get through this crunch, the thinking goes. But adding contractors to supplement in-house resources is not cost-effective for day-to-day operations and does not address systemic problems within the IT organization. IT needs to rethink the data center operating model for the cloud era. And that’s tough to do when you’re already stretched thin and on a tight budget.

    The fact is, the entire IT consumption model is shifting. Knowing why, how and when to consume a given product or service is half the battle. Using a combination of Remote Infrastructure Management (RIM), field services and support, and contractors can help. This hybrid, IT-Operations-as-a-Service model allows for a more cost-effective, SLA-based and business-oriented approach, enabling you to systematically out-task IT maintenance and management functions so your IT team can focus on strategic initiatives.

    Tailoring your service consumption will help you begin to transform your IT operations. A true enterprise-class IT-Operations-as-a-Service solution will feature the right skill sets on demand, remote or on-premise management, automated tools and standardized methodologies that enable scalability, rapid problem resolution and repeatable results. The right level of solution will bring together people, processes and technology. And through efficiency and economies of scale, IT-Operations-as-a-Service can dramatically reduce your operational costs, leaving more of your budget for innovation.

    I’m not talking about outsourcing your IT operations. All too often, outsourcing simply transfers existing processes to a third party in a “people-based” model. With traditional outsourcing, IT loses control without really solving the problem. That’s why traditional outsourcing arrangements are unpopular and typically fail to achieve their objectives.

    Nor am I referring to traditional break/fix support agreements. Those types of agreements are important to have when things go wrong, but they simply react to IT problems without providing predictability or scalability.

    An IT-Operations-as-a-Service solution is not about system maintenance; it’s about redefining the IT operational environment and cost structure. It enables organizations to selectively out-task the activities for which they lack the capacity, competence or cost advantage. In utilizing IT-Operations-as-a-Service, the in-house IT team remains in control of the organization’s business and technology objectives while optimizing IT operations.

    I can’t seem to stop writing about Microsoft. As I have been touting for a while now, the company is in high innovation mode, striking on multiple fronts and clearly in a hurry: acquiring where it needs to, improving where it needs to and building where it must.

    Just last week I was discussing whether or not the company would build or buy a tablet, and then Microsoft unveiled Surface. I still think it will acquire RIM or Nokia. I am leaning toward the latter, as its current stock price is ideal, but we will see.

    On the heels of Windows Server 2012, all the new features of Hyper-V 3, a new SQL Server, a new App-V 5 and an enhanced cloud strategy with Azure (which now also focuses on IaaS instead of just PaaS), Microsoft has finally admitted that SharePoint and its social capabilities are not good enough for the enterprise. It recognizes that this is an area where it desperately needs to improve, and that developing something from the ground up would take time, so it acquired Yammer without hesitation.

    Where will it fit? Everywhere! For starters, Yammer will layer on top of SharePoint and extend its features to make it more social-enterprise friendly. After that, Microsoft will go after SkyDrive for the enterprise and extend collaboration features to files. In the words of fellow analyst Jason Maynard, “Files are to collaboration what photos are to Facebook,” and honestly I could not have summarized it better.

    Microsoft has recognized that both e-mail and file sharing a la SharePoint are not good enough anymore for today’s enterprises. Yammer will bring that much-needed collaboration and breathe life into Microsoft’s products, including Office.

    But what else can Microsoft do with Yammer? Well, how about integration with Lync? That would be a perfect combination. Not only could you collaborate on files in SkyDrive and SharePoint, but you could also launch meetings using Lync from within Yammer. It’s very similar to how Citrix will integrate Podio with the GoTo family, and to how Cisco will integrate WebEx with Quad.

    Microsoft’s move reinforces a notion I have been circulating that collaboration platforms are likely to be the next desktop, where aggregation of resources and applications happens and where collaboration is native. I think Yammer was absolutely an inevitable step for Microsoft and I applaud the acquisition. I also think we are not done seeing consolidation — Salesforce.com and possibly SAP, IBM and Oracle are due for similar social acquisitions as well.

    The Yammer acquisition clearly validates that the enterprise is ready for social business, and that desktop virtualization, collaboration and cloud data are slowly converging and crossing paths to form a true end-to-end enterprise consumerization strategy.

    What are your thoughts on the Yammer acquisition? Is your organization ready for the social enterprise?

    This column was originally posted on VirtualizationReview.com

    Posted in: Collaboration