The views expressed in the posts and comments of this blog do not necessarily reflect those of Sigma Solutions. They should be understood as the personal opinions of the author. Nothing on this blog should be taken as official.
How Can IT Cope with the Speed of Business Change? - 2013.05.06
Here’s a sobering statistic. Gartner analysts estimate that, through 2015, just 10 percent of IT organizations will have the operational and infrastructure agility to respond to the speed of change required by the business.
On the bright side, that represents a significant increase in IT agility over the next two years. Less than 2 percent of IT organizations are sufficiently responsive today.
Clearly, IT is not keeping pace with the business despite the growing use of virtualization, scale-out storage and other technologies that facilitate IT agility. The authors of the Gartner report explain that the problem is operational rather than technological. Pressured to ensure high availability and data integrity, IT has become risk averse and reluctant to change its processes and internal controls. Yet change it must in order to meet business demands.
Obviously, change can’t be implemented willy-nilly. Gartner recommends that IT organizations review their change management processes from both a business and IT perspective in order to better balance risk aversion against business velocity. Only then can they ensure that the right people, processes and technologies are in place.
Let’s skip the “people” component for a moment and focus on the other two. Many IT organizations continue to rely on manual or semi-automated processes that fail to capitalize on the efficiencies of today’s data center technologies. In many cases, IT also lacks the management tools needed to optimize operations — or, worse, has a growing array of management point products without an overarching operational structure or sufficient staff to watch all the little needles and dials.
With constrained budgets, skills gaps and increasingly stringent SLAs, it’s little wonder that IT is loath to abandon what has worked in the past. Few IT organizations have the resources and expertise to support current workloads and data volumes — much less effect real operational change.
This is where IT-Operations-as-a-Service can help. IT-Operations-as-a-Service goes beyond commodity managed services programs to help IT shops create an optimized operational environment. We’re not talking about ensuring a “green light” — we’re talking about an operational model that can scale rapidly to meet changing business requirements. And, oh by the way, it can deliver significant cost savings through improved efficiency and lower personnel costs.
IT-Operations-as-a-Service is able to achieve these benefits through automated processes and procedures and a secure and auditable management platform that supports proactive maintenance, task management and remote support. The service provider should deliver 24×7 support coverage, problem ownership and streamlined escalation in a flexible model that meets the customer’s SLAs and business requirements.
Of course, if process and automation were the only things necessary to optimize IT operations, many more organizations would be prepared to handle the accelerated pace of business change. The IT-Operations-as-a-Service provider should also have deep experience deploying and supporting large, heterogeneous networks and expertise spanning all IT operational functions and key enabling technologies.
In a future post we will examine how both IT-Operations-as-a-Service and supplemental staffing can fill skills gaps and how to choose the best solution for a particular function. In the meantime, we would be interested in hearing about how your IT organization is managing the pace of business change.
Posted in: IT Operations
Object Storage: The Time Is Right - 2013.04.08
Virtualization has transformed the data center by breaking the relationship between applications and the IT systems on which they run. However, the benefits of virtualization often are offset by increased storage complexity and expense.
Unified storage provides a solution to this quandary by allowing organizations to consolidate and virtualize storage across protocols, environments and mixed storage platforms. Combinations of block storage (Fibre Channel or iSCSI) and file storage (NAS systems with CIFS or NFS) can be managed via a common set of features such as snapshots, thin provisioning, tiered provisioning, replication, synchronous mirroring and data migration — all from a single user interface. This shift toward a shared infrastructure enables organizations to achieve storage utilization rates of 85 percent or more, compared to the sub-50-percent rates in standalone storage silos.
Unified storage remains an evolving technology, however. Typically, these systems leverage virtualization to create deeper integration of file- and block-based storage. The newest addition to the mix is object storage.
In a file-based system, a data file is accessed by locating the specific address within the file system hierarchy. With object storage, a unique identifier plus the file’s metadata is used to locate the file. Because objects are retrieved using their unique identifiers, there’s no need to know a directory path or even the object’s location. This location transparency makes object storage ideal for managing and archiving large quantities of static information in the cloud.
In fact, object storage is geared toward the cloud — it uses the HTTP protocol rather than file or block storage standards. Applications access data through open interfaces such as SOAP (Simple Object Access Protocol) and REST (Representational State Transfer), addressing each object by its unique identifier.
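To make that access pattern concrete, here is a minimal sketch of ID-based access over a RESTful object API, using Python’s requests library. The endpoint, object ID scheme and metadata header convention are illustrative assumptions, not any particular vendor’s interface:

```python
import requests

# Hypothetical object store endpoint -- illustrative only, not a real product API.
ENDPOINT = "https://objects.example.com/v1/media-archive"

def put_object(object_id: str, data: bytes, metadata: dict) -> None:
    """Store an object under a flat, globally unique identifier."""
    # Metadata travels with the object; there is no directory path to manage.
    headers = {f"x-object-meta-{k}": v for k, v in metadata.items()}
    requests.put(f"{ENDPOINT}/{object_id}", data=data, headers=headers).raise_for_status()

def get_object(object_id: str) -> bytes:
    """Retrieve an object by its ID alone -- no knowledge of its physical location."""
    resp = requests.get(f"{ENDPOINT}/{object_id}")
    resp.raise_for_status()
    return resp.content

# Usage: the caller never touches LUNs, mount points or directory trees.
put_object("4f8a9c2e-video-0001", b"<video bytes>", {"type": "mp4", "owner": "marketing"})
clip = get_object("4f8a9c2e-video-0001")
```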
Object storage is particularly well suited to unstructured data such as videos, images and sound files that don’t necessarily need hierarchical indexing. That’s why sites such as Facebook use object storage to handle massive volumes of multimedia files, and some enterprises are using it for archiving unstructured data, email and virtual machine images.
Interest in object storage is increasing due to the explosion in unstructured data growth driven by regulatory compliance requirements and data analytics. In addition to distributed access, object storage gives you the ability to store millions of objects without running up against the restrictions associated with file-based storage systems. Object storage also uses a flat address space, reducing complexity by eliminating the need to manage logical unit numbers (LUNs). And it makes sense to build a storage infrastructure based upon the public cloud model if you’re implementing a private cloud.
Object storage is not a replacement for file- and block-based storage. It is not well-suited to data that changes frequently, and the HTTP protocol limits throughput. The fixed attributes of file storage are needed to ensure consistency in shared-file applications, and the performance of block storage is required for high-performance OLTP applications.
However, organizations grappling with growing volumes of unstructured data should consider adding object storage to the mix. Sigma can help you evaluate and deploy an intelligent, object-based storage solution that helps combat storage sprawl and increase efficiency.
Three Paths to a More Agile Infrastructure - 2013.01.10
By John Flores,
VP of Marketing and Business Development
IT-as-a-Service is the new nirvana, an agile IT infrastructure that enables rapid response to changing business conditions and needs. Some people refer to this agile infrastructure as the private cloud. Whatever you want to call it, it represents a transformation of the traditional data center architecture.
Traditionally, IT infrastructure was built vertically to support individual applications. That monolithic structure made it difficult to scale the environment to meet increased storage or performance demands. The new agile IT infrastructure is built out horizontally, with applications spread across pools of virtualized compute, storage and networking resources. Because those pools can readily scale in response to changing requirements, this new architecture is much more flexible and efficient.
There are three ways to go about building an agile environment. One option is to go out and buy best-of-breed components and construct it from the ground up. The beauty of that strategy is that it’s extremely flexible and can be finely tuned to existing infrastructure and specific business requirements. The downside is that there is a good deal of complexity and effort involved. That’s why customers call Sigma — we have proven experience helping customers build private clouds.
At the other end of the spectrum is a converged infrastructure solution such as Vblock from VCE. Vblocks are validated “stacks” that integrate best-in-class virtualization, networking, compute, storage, security and management technologies. They offer a more streamlined approach to creating private clouds, and Sigma has the certifications and expertise to successfully integrate Vblocks into the IT environment. But while Vblocks deliver pervasive virtualization and scale, a pre-engineered, pre-integrated solution may be somewhat limiting in certain environments.
A third option is to use a reference architecture — a tested and validated design based upon best-of-breed technologies. NetApp’s FlexPod solution, for example, is a predesigned base configuration comprising the Cisco Unified Computing System (UCS), Cisco Nexus data center switches and NetApp FAS storage. The reference architecture is modular or “pod-like,” such that the configuration of each customer’s FlexPod may vary. Nevertheless, a FlexPod unit can easily be scaled up by adding resources or scaled out by adding FlexPods. It creates an agile computing environment that can meet ever-increasing performance demands and support “big data” workloads.
A Sigma customer recently experienced the benefits of the FlexPod approach. The customer had already implemented NetApp storage, Cisco UCS and a Nexus fabric, and opted to leverage that infrastructure to create a FlexPod. Sigma engineers helped the customer tune the configuration in order to validate the design. It enabled the customer to rapidly expand a virtual desktop initiative with the confidence that the infrastructure could support the workload.
EMC’s VSPEX is another reference architecture. With VSPEX, customers can combine their choice of industry-leading compute, networking and virtualization technologies in a proven infrastructure validated by EMC and built on highly flexible EMC storage and backup infrastructure. As a result, VSPEX Proven Infrastructures significantly reduce the planning, sizing and configuration burdens associated with private cloud deployments.
Of course, Sigma has been providing these types of solutions for a number of years now. Sigma’s broad and deep experience across the data center enables us to create robust yet highly flexible environments based upon best-of-breed technologies. Whatever solution best meets the customer’s needs, Sigma has the knowledge and experience to transform the IT infrastructure and achieve the nirvana of the IT-as-a-Service model.
End-User Computing Now, Part 2 - 2012.11.05
By Elias Khnaser
CTO, Sigma Solutions
In part one, I offered a high-level overview of a suggested end-user computing strategy. Let’s break down the topics, starting with the desktop strategy.
While we may be in the post-PC era, that doesn’t mean physical desktops and laptops are going to disappear. We need to continue to fine-tune and deploy desktop management tools like Microsoft SCCM and others. On the other hand, ignoring desktop virtualization and VDI is no longer acceptable either, and the continuing rhetoric and debate about CAPEX vs. OPEX and the exaggerated costs of VDI is just a bunch of “malarkey” (sorry, I had to find a use for this word).
A well-planned, well-designed desktop virtualization infrastructure can be very cost-effective, even cheaper than a physical implementation. It is also about time we positioned the benefits of desktop virtualization from a business perspective: BC/DR, flexibility and more. We must look beyond how much it is going to cost and consider what we gain. Anyone can make numbers look the way they want, so let’s agree to get past the TCO debate — desktop virtualization has a place and it is an integral part of the strategy.
Mobile Device Management, Mobile Application Management and Mobile Information Management — they’re all new, colorful terms. With the mobile device explosion, we need to evolve our mindset from one that has traditionally been about controlling the device to one that governs it. Better yet, we should govern the enterprise resources on these devices. MDM aids in enforcing device passwords, remote selective wipe of enterprise resources, encryption, reporting and so on.
MAM is about mobile applications: sandboxing and encapsulating them so that we can apply policies against them. Without sandboxing or application wrapping, it is very difficult for enterprises to control what applications can and cannot do. This is especially apparent with native e-mail clients: without sandboxing the e-mail client, mobile applications installed on the device could gain access to corporate contacts and information that otherwise would be off-limits. Native e-mail clients are also so embedded into the mobile OS that they are difficult to sandbox. That’s why vendors such as Citrix and VMware now provide their own sandboxed e-mail clients as a complementary alternative.
MAM can also serve as a consolidated enterprise application store where Windows, SaaS, mobile and other applications can be consumed. This is, again, an area where MDM vendors and virtualization vendors such as Citrix and VMware overlap. As you make your technology selections, choose a MAM solution that integrates well with your desktop strategy and your choice of technology partner.
Mobile Information Management, also known as Mobile Data Management, essentially provides Dropbox-like functionality for the enterprise. The idea is to enforce policy-driven security that allows or denies file syncing to certain devices in certain locations and, more granularly, allows or disallows certain file types on certain devices.
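As a rough illustration of that policy model, here is a minimal sketch of how a sync request might be evaluated against device, location and file-type rules. The rule set and attribute names are hypothetical, not drawn from any specific MIM product:

```python
from dataclasses import dataclass

@dataclass
class SyncRequest:
    device_type: str      # e.g. "managed-laptop", "ios-tablet", "android-phone"
    location: str         # e.g. "office", "home", "unknown"
    file_extension: str   # e.g. "docx", "pdf", "exe"

# Hypothetical policy tables: where syncing is allowed, what each device may hold.
ALLOWED_LOCATIONS = {"office", "home"}
BLOCKED_EXTENSIONS = {"exe", "dll"}
DEVICE_RULES = {
    "managed-laptop": {"docx", "xlsx", "pdf", "pptx"},
    "ios-tablet": {"pdf", "pptx"},      # read-oriented formats only
    "android-phone": {"pdf"},
}

def may_sync(req: SyncRequest) -> bool:
    """Allow a sync only if device, location and file type all pass policy."""
    if req.location not in ALLOWED_LOCATIONS:
        return False
    if req.file_extension in BLOCKED_EXTENSIONS:
        return False
    return req.file_extension in DEVICE_RULES.get(req.device_type, set())

# A spreadsheet may sync to a managed laptop at the office...
print(may_sync(SyncRequest("managed-laptop", "office", "xlsx")))   # True
# ...but not to a phone in an unknown location.
print(may_sync(SyncRequest("android-phone", "unknown", "xlsx")))   # False
```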
Social Enterprise / Collaboration
Do you really enjoy sending one-word e-mails, e-mails that say “Thank you” or “Yes”? Do you enjoy searching through thousands of e-mails to locate a conversation or find a file attachment? If you are like me, you probably despise e-mail. I truly hate it: in my consulting world, when working on a customer’s statement of work, we version the SOW and send it back and forth endlessly. There has got to be an easier way. What if we had a Facebook-like enterprise platform where we could collaborate with colleagues? Better yet, what if that social enterprise were linked to our MIM solution, so we could drag files in and collaborate on them while they sit in a centralized, secure location?
Of course social platforms still need to mature somewhat for the enterprise and you have to be able to answer questions such as:
- What level of use of social networking will you allow?
- Are any social networking services more enterprise-friendly than others?
- How are they used for work purposes? (crucial question)
- How do you see social enterprise changing communication and collaboration behavior at your company?
I will take it one step further and say that I believe social enterprise platforms such as Socialcast, Podio and others have the potential to become the next desktop, and I have blogged about them here several times.
Every customer tells me they have a wireless infrastructure, and I recognize that wireless is part of the DNA of every enterprise. What many dismiss or disregard, however, is that these wireless infrastructures were not built to handle the number of devices that are, or soon will be, connecting to them. More important, the types of services now delivered over these wireless infrastructures are significantly different.
Remember, in an end-user computing strategy, you have to take into account remoting protocols like PCoIP, HDX, RDP and others. You also have to take into account the new and updated technologies that could make other services better. So, please don’t ignore the wireless infrastructure.
We are also looking for a secure and scalable infrastructure with pervasive coverage to detect and mitigate sources of interference. A wireless infrastructure capable of location tracking will tie very nicely with your MDM tools to enable or disable certain functionality depending on your geographic location.
There is no way you can think about an end-user computing strategy, and BYOD in particular, without taking into account security in general and network access control in particular. You should be investigating and planning how to control wired and wireless access with dynamic, differentiated access policies, enforce context-based security, and provide self-service access and guest lifecycle management via agent or agentless approaches.
Now it’s your turn. Do you agree that an end-user computing strategy is needed? And if so, how can we refine and fine-tune the strategy I laid out here? Comment away!
Not Just VDI — It’s All About End-User Computing Now - 2012.10.30
by Elias Khnaser
CTO, Sigma Solutions
End-user computing has expanded dramatically and grown more complex. In this two-part series, we will explore strategies enterprises can use to address the current issues, from consumerization and BYOD to desktop virtualization and physical desktop management.
It used to be fairly simple and straightforward: End-users either got a desktop or a laptop and those who needed a bit more accessibility got a Blackberry for mobile email, and that was it. Sophisticated enterprises managed those desktops with Microsoft SCCM, Symantec Altiris, LANdesk or similar technologies.
Those days are gone and the situation has radically changed, with the needs and requirements of end-users having evolved to the point that they have, on average, two or three devices — a PC and smartphone and/or tablet.
Access to resources has also changed. We used to just load everything on the laptop, but now end-users want and need selective access to resources on their preferred device from anywhere at any time over any connection.
That means it’s time to rethink the end-user computing strategy.
For many years, IT treated the end-user space as a second-class citizen, with no real IT talent devoted to it or any serious planning or strategy. The attitude was to just get it done no matter how sloppy the method. Most of our time and effort was focused on the data center, the crown jewel of every IT engineer’s resume. We wanted to go through the ranks, through the help desk and get to the data center — where real computing happens.
Well, today, enterprises are demanding that the same level of seriousness we dedicated to the data center now gets focused on the end-user computing side.
Where do we start? Let’s begin by identifying the components of this new strategy:
- Desktop Strategy — this means a strategy for physical and virtual desktops and applications
- MDM/MAM/MIM — necessary to govern the mobile devices, applications and data
- Collaboration — a modern way of collaborating between end-users that goes beyond the traditional tools to reach the social enterprise
- Wireless Infrastructure — a robust, dynamic and scalable wireless infrastructure to support the influx of devices and services
- Security — at the heart of any strategy is security, and end-user computing security in the age of BYOD is crucial
Now, the challenge is the ability to weave all these technologies together and avoid overlap, as some of the vendors in question provide similar capabilities. For instance, most MDM vendors now have some sort of Dropbox-like functionality, but so do desktop virtualization vendors such as VMware and Citrix.
Next time, we’ll break down these components and discuss the strategy in more detail. In the meantime, please share your feedback in the comments, especially if I have missed any high-level topics.
Network Evolution and SDN/OpenStack: My Four Cents - 2012.10.26
By Brad Moss
Senior Consulting Engineer
Companies such as Vyatta have been delivering software-defined networking (SDN) for years, and it works great. The issues come down to the performance hits you take, depending on which technologies are implemented in software.
A prime example is VPN. Any VPN solution worthy of the name “concentrator” has purpose-built hardware chips that process encryption and decryption faster than a general-purpose CPU can (assuming the CPU is multitasking with other threads from servers and whatnot). The real issue is connecting disparate systems together — that still requires physical cabling, and it will keep network hardware around for a long time.
NX-OS is a VM running on a Nexus chassis, and likewise with the 62xx fabric interconnects. They actually run three VMs: management, the Web GUI cluster and the actual FI software. So I’m not sure it is fair to say that the network vendors are not moving toward SDN; they just have not approved off-the-shelf hardware to run it.
In the case of ideas such as OpenFlow, a capitalistic society will not allow a completely open source product to take over the masses; very few open source projects ever make it into the mainstream. As long as people demand innovation and ever-increasing performance from CPU, memory and the links between physical servers, there will have to be higher-grade silicon in the hardware, not RadioShack “build it yourself” network gear, to forward that traffic.
He who owns the IP is king. Even if we find a way to make the protocol widespread, there will be something for sale to support it (think Red Hat).
Yeah, I think networking is overly complicated in some areas and could be simplified to the point where one could manage the entire infrastructure from a central console, a single pane of glass. UCS is a prime example: initial setup takes three to five days, but once it is installed and configured, hundreds of servers can be rolled out at the click of a mouse in an easy-to-use front end. I can see networking going the same way. And, as with SIP, every vendor will have its own flavor of the “standard” that will not play nicely with the others.
So, in the end: IBM virtualized everything in the 1960s and ’70s, and then along came the new marketing push: “We need to get computing resources into users’ hands and decentralize.” It is all about marketing and selling product. The era we are in now is about moving the personal computer closer to the data sources and giving users ultra-portable, high-powered devices to reach that personal computer remotely. Everyone wins in this deal except the PC manufacturers; I guess they are the odd man out.
I have researched SDN and OpenStack a bit more since writing the first half of this post. It makes a lot of sense and takes an out-of-the-box look at networking. Network engineers are the masters of complexity (http://youtu.be/CW7lT6oUWjI). That is too true. For some this is a problem just as VoIP was for the old telecom guys. The stagnant network guy that has been in the same job for 15 to 20 years and knows every little piece of hardware in his network (master of complexity) is going to be slow to adopt the SDN architectures.
Once networks are simplified and are essentially controller-based similar to how wireless networks have been operating for the last few years, those complexities go away. If the network guys do not adopt a new technology they will be out of a job.
I have been working in data center and enterprise-class networks for 13+ years. It is my goal in every situation to make what I am doing today irrelevant in the future. This requires us to continue learning and adapt to the new trends and not go stale or push back on the technology.
Interesting times are upon us in networking. This is really the first serious effort to change the network since Ethernet and IPv4 went mainstream; IPv6 was ratified in 1998, and the government missed its deadline in the last month or so to move onto the “new” address scheme. I have to give a shout-out to all the people around me who see my potential and urge me to move into new areas. That is how I became a UCS deployment engineer, and not just because UCS is a Cisco product.
Big Data, Big Storage Problems - 2012.10.22
By John Flores,
VP of Marketing and Business Development
“Big data” is one of the biggest buzzwords in the IT industry today, a term used to describe the massive amount of structured and unstructured data produced by a new generation of systems and applications. Organizations are seeking to tap this data to uncover new insight and make more-informed business decisions. In many cases, however, organizations are finding that they have to resolve big storage problems before they can even begin to consider the potential for big data.
We’re talking about datasets so large that they transcend the ability of typical database software tools to capture, store, manage and analyze. Although the definition is necessarily subjective, most analysts use the term in reference to petabytes, exabytes or potentially even zettabytes of data.
This clearly puts a strain on data storage infrastructures. The traditional “scale-up” storage architecture suggests that the sky is the limit. In reality, however, the overall volume of data has become so high that it exceeds the capacity of traditional storage systems. In order to accommodate big data storage volumes, organizations end up deploying tens or even hundreds of storage silos, most of which are underutilized. This storage sprawl increases capital outlays and power and cooling costs, and causes severe management headaches.
Performance bottlenecks are another problem. Traditional storage systems just don’t have enough horsepower to complete big data operations efficiently. In order to handle all the I/O requests, organizations tend to add more spindles to the environment and reduce the amount of data stored on each disk. This again leads to a bloated yet underutilized storage infrastructure.
Big data demands a rethinking of the storage infrastructure. One solution that’s gaining traction is EMC Isilon scale-out storage. An Isilon IQ system consists of industry-standard hardware components that function as nodes connected via an Infiniband high-speed interconnect. OneFS, a next-generation storage operating system, serves as the intelligence behind the Isilon IQ storage platform. Increasing capacity, performance and throughput is as simple as adding more nodes to the cluster — OneFS automatically redistributes data evenly across all nodes. The result is a single file system that can scale out on demand, enabling one person to manage one petabyte as easily as 100 terabytes.
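To illustrate the scale-out idea in general terms, here is a toy sketch of what happens when a node joins: the cluster’s blocks are redistributed evenly across the enlarged set of nodes. This is a deliberate simplification, not OneFS’s actual placement, striping or protection logic:

```python
def rebalance(nodes: dict, new_node: str) -> dict:
    """Spread all blocks evenly across the existing nodes plus the new one."""
    all_blocks = [b for blocks in nodes.values() for b in blocks]
    names = list(nodes) + [new_node]
    balanced = {name: [] for name in names}
    for i, block in enumerate(all_blocks):
        balanced[names[i % len(names)]].append(block)   # round-robin placement
    return balanced

cluster = {
    "node1": ["b0", "b1", "b2", "b3", "b4", "b5"],
    "node2": ["b6", "b7", "b8", "b9", "b10", "b11"],
}
cluster = rebalance(cluster, "node3")
print({name: len(blocks) for name, blocks in cluster.items()})
# {'node1': 4, 'node2': 4, 'node3': 4} -- capacity and throughput grow
# simply by adding a node; no manual re-provisioning of data.
```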
A new breed of scale-up storage solutions can provide the processing power to conquer performance bottlenecks. EMC VMAX and Hitachi Data Systems VSP are high-performance solutions that deliver the raw horsepower needed to handle large datasets.
These solutions can be used in concert to create a robust storage environment capable of handling big data. De-duplication, tiering, archival and retention policies can also be used to streamline the big data environment.
Of course, what’s “big data” today will rapidly become the norm as data volumes continue to skyrocket. Traditional storage subsystems will no longer be viable options. Organizations need to start preparing for that inevitable future with a new approach to storage.
By Brian Nettles,
VP of Operations and CIO
Almost every CIO who responded to Gartner’s 2012 CIO Agenda survey late last year said that reducing operational costs and increasing IT investments were top priorities. However, many organizations struggle to contain IT operational costs, creating a vicious cycle that precludes needed investments. Gartner Research Vice President Stewart Buchanan explained it this way:
Organizations that overspend on operational activity have little money left to invest in new projects. Without reinvestment, organizations cannot restructure and optimize their operational spending. This results in rising non-discretionary costs, which in turn result in further underinvestment, lack of competitiveness, failing client service and loss of revenue. This makes future spending even less affordable and even less avoidable.
Part of the problem stems from failing to include operational expenditures in project budgets, or from overly optimistic operational cost estimates. But at a more fundamental level, many IT shops find it difficult to manage today’s complex environment — much less prepare to meet tomorrow’s operational needs.
Staffing is an ongoing challenge. It’s tough to find skilled and certified personnel with the right cultural fit, and then keep them up-to-date with ongoing training. IT managers often find themselves running a 24×7 operation with a 9×5 staff. Worse, operational knowledge typically is held by a few key personnel, putting the organization at risk. And because of personnel constraints, many IT shops lack mature processes for change control, capacity planning and problem management.
Management tools have largely failed to deliver promised efficiencies. Most monitoring systems spit out raw data with little actionable information. More sophisticated tools are overly complex and often wind up as shelfware. As a result, organizations lack visibility into IT performance and insight as to the true costs of IT operations.
Fixing IT operations requires the right blend of people, process and technology, but all too often organizations look at these components discretely. Adding contractors just brings in more bodies without driving real change. Outsourcing firms may take a process-driven approach, but they generally lack the flexibility needed to support a changing environment. Management tools, when implemented correctly, can enable more proactive operations, but they increase the IT footprint and total cost of ownership. How many IT managers have lamented that they spend more time managing their support tools than the technology those tools are supposed to support?
Sigma has developed an IT-Operations-as-a-Service that addresses all aspects of the operational environment. We looked at the market and captured the best of the IT outsourcing model — great technical expertise and refined processes — and combined those resources with the technology needed to manage complex environments. We built a relationship-oriented solution from the ground up, with local talent, 24×7 coverage, a cloud-based operations platform and well-defined standard operating procedures, all in a flexible consumption model in which you pay for what you use.
Almost everyone agrees that IT operations are broken in many organizations. Sigma has gone to market with an IT-Operations-as-a-Service solution designed to fix the problem once and for all. By selectively out-tasking IT operations to Sigma, organizations can begin to achieve their goals of reducing operational costs and increasing IT investments.
Posted in: IT Operations
Optimizing IT Operations - 2012.09.04
By Brian Nettles,
VP of Operations and CIO
Some interesting buzz came out of VMworld last week. In his keynote address, incoming VMware CEO Pat Gelsinger called today’s data center “a museum.” His point was that data center operations haven’t kept pace with the rate of change in today’s IT environment.
Some of that has to do with technology but a lot of it involves process. Too many IT shops have too many manual processes that can’t keep up with the speed, flexibility and scale of today’s data center. Organizations are rolling out new IT services faster than ever but don’t have the resources to manage and support them properly. There needs to be greater emphasis on efficiency, automation and best practices.
There can be a tendency to put a Band-Aid on the problem and hope it gets better on its own. If we bring in a couple of contractors or resident engineers we’ll get through this crunch, the thinking goes. But adding contractors to supplement in-house resources is not cost-effective for day-to-day operations and does not address systemic problems within the IT organization. IT needs to rethink the data center operating model for the cloud era. And that’s tough to do when you’re already stretched thin and on a tight budget.
The fact is, the entire IT consumption model is shifting. Knowing why, how and when to consume a given product or service is half the battle. Using a combination of Remote Infrastructure Management (RIM), field services and support, and contractors can help. This hybrid, IT-Operations-as-a-Service model allows for a more cost-effective, SLA-based and business-oriented approach, enabling you to systematically out-task IT maintenance and management functions so your IT team can focus on strategic initiatives.
Tailoring your service consumption will help you begin to transform your IT operations. A true enterprise-class IT-Operations-as-a-Service solution will feature the right skill sets on demand, remote or on-premise management, automated tools and standardized methodologies that enable scalability, rapid problem resolution and repeatable results. The right level of solution will bring together people, processes and technology. And through efficiency and economies of scale, IT-Operations-as-a-Service can dramatically reduce your operational costs, leaving more of your budget for innovation.
I’m not talking about outsourcing your IT operations. All too often, outsourcing simply transfers existing processes to a third party in a “people-based” model. With traditional outsourcing, IT loses control without really solving the problem. That’s why traditional outsourcing arrangements are unpopular and typically fail to achieve their objectives.
Nor am I referring to traditional break/fix support agreements. Those types of agreements are important to have when things go wrong, but they simply react to IT problems without providing predictability or scalability.
An IT-Operations-as-a-Service solution is not about system maintenance, it’s about redefining the IT operational environment and cost structure. It enables organizations to selectively outsource activities with which they don’t have capacity, competence or cost advantages. In utilizing IT-Operations-as-a-Service, the in-house IT team remains in control of the organization’s business and technology objectives while optimizing IT operations.
Where Yammer Fits in Microsoft’s Cloud - 2012.06.27
I can’t seem to stop writing about Microsoft. As I have been touting for a while now, the company is in high innovation mode, striking on multiple fronts, and it seems to be in a hurry: acquiring where it needs to, improving where it needs to and building where it must.
Just last week I was discussing whether the company would build or buy a tablet, and then Microsoft unveiled Surface. I still think it will acquire RIM or Nokia; the latter is what I am leaning toward, as its current stock price is ideal. But we will see.
On the heels of Windows Server 2012, all the new features of Hyper-V 3, a new SQL Server, a new App-V 5 and an enhanced cloud strategy with Azure (now also focusing on IaaS instead of just PaaS), Microsoft has finally admitted that SharePoint’s social capabilities are not good enough for the enterprise. It recognized that this is an area that desperately needs improvement, and that developing something from the ground up would take time, so it acquired Yammer without hesitation.
Where will it fit? Everywhere! For starters, Yammer will layer on top of SharePoint and extend its features for more social-enterprise friendliness. After that, Microsoft will go after SkyDrive for the enterprise and extend collaboration features to files. In the words of fellow analyst Jason Maynard, “Files are to collaboration what photos are to Facebook,” and honestly I could not have summarized it better.
Microsoft has recognized that both e-mail and file sharing a la SharePoint are not good enough anymore for today’s enterprises. Yammer will bring that much-needed collaboration and breathe life into Microsoft’s products, including Office.
But what else can Microsoft do with Yammer? Well, how about integration with Lync? That would be a perfect combination. Not only can you collaborate on files in Skydrive and SharePoint, but you can also launch meetings using Lync from within Yammer. It’s very similar to how Citrix will integrate Podio with the GoTo family and very similar to how Cisco will integrate WebEx with Quad.
Microsoft’s move reinforces a notion I have been circulating that collaboration platforms are likely to be the next desktop, where aggregation of resources and applications happens and where collaboration is native. I think Yammer was absolutely an inevitable step for Microsoft and I applaud the acquisition. I also think we are not done seeing consolidation — Salesforce.com and possibly SAP, IBM and Oracle are due for similar social acquisitions as well.
The Yammer acquisition clearly validates that the enterprise is ready for social business, and that desktop virtualization, collaboration and cloud data are slowly converging into a true end-to-end enterprise consumerization strategy.
What are your thoughts on the Yammer acquisition? Is your organization ready for the social enterprise?
This column was originally posted on VirtualizationReview.com.
Posted in: Collaboration
Nutanix: Go Big or Go Home! - 2012.06.13
Is it arrogant for a company the size of Nutanix to take on the giants of our industry by saying “FU, SAN”? It most definitely is, but I like it, because that arrogance is backed by a solid technical solution and an innovative perspective: a new way of thinking that says “monolithic” infrastructures are great, but they are not the only game in town.
If we look at the large cloud deployments, the likes of Google, Facebook and others, we quickly notice that they use commodity hardware in a grid-computing-style approach. That is not to say they have no shared storage or SAN, but they rely on many more of these commodity nodes, which together form their compute fabric. Those nodes need a connecting layer, a file system, that ties them together to yield the desired results.
Nutanix brings that type of thinking to the enterprise by offering a converged compute and storage cluster glued together by the Nutanix Distributed File System. It is this distributed file system that allows a Nutanix cluster to offer enterprise features that traditionally required shared storage, such as HA, DRS and vMotion (live migration, XenMotion). Nutanix ships as a 2U block, or container, which holds four nodes (hosts). Each node can take up to 192GB of RAM, dual-socket Intel CPUs and three tiers of storage: 320GB of Fusion-io flash, 300GB of SATA SSD and 5TB of SATA spinning disk.
While Nutanix can be used for different types of use cases and workloads, I am particularly interested in it for desktop virtualization. One of the main barriers for desktop virtualization adoption has been cost and then, to an equal extent, complexity. Nutanix breaks down both barriers, as the cost of entry is very acceptable and the complexity is simplified.
A while back I had written an article on the cost of desktop virtualization versus physical desktops and a reader asked me how that would work for smaller organizations. Honestly at the time it was not going to be as effective as the value proposition I showed at scale. With Nutanix, however, we now have a story for the small organization as well as the large enterprise.
I have maintained for a while that local disk is not the way to go for desktop virtualization, and I have religiously argued against it, because all of the solutions that promoted local disk assumed we don’t need enterprise features. My take is that this is not acceptable: I don’t want to go backwards, I don’t want to lose features and I am not willing to compromise. I also had a lot of reservations about the configurations being suggested; we will get to that in a minute.
So why do I like Nutanix so much? There is nothing special about the hardware configuration: SuperMicro servers, Intel inside, Fusion-io cards, some SSDs and some SATA drives. Big deal, right? I can put that together easily. Sure you can, but that is where my reservations come into play. First, in that do-it-yourself scenario you lose all the enterprise features. Second, there are technical challenges with SSDs, from write coalescing to write endurance, that you cannot overcome just by bolting hardware together. You need a software layer that addresses these issues, and that is where the Nutanix Distributed File System comes into play: it enables the enterprise features and addresses the SSD challenges I mentioned. So, do I accept local disk in this configuration? Absolutely.
For many customers, another challenge is they want to start small with VDI and grow into it. Monolithic infrastructures get cheaper at scale, which is why customers had to buy the infrastructure ahead of time to fit within certain discount ranges. With Nutanix you have to buy the first fully populated block with four nodes, but then after that you can buy a block with a single node in it and scale as you need to. Pretty elegant, if you ask me.
Couple Nutanix with Citrix VDI-in-a-box or VMware View Enterprise for small or medium size organizations and that is a killer solution. Couple it with XenDesktop or View Premier and — voila! — scale-out enterprise solution. The cost of desktop virtualization drops again. Next argument, please!
Now I can’t write this glowing column in support of Nutanix without finding something I don’t like. Today, the only supported hypervisor is VMware ESXi, and while I realize the market share is in VMware’s favor, ignoring Microsoft Hyper-V is a huge mistake. Since one of the use cases for this solution is desktop virtualization, ignoring XenServer is also not a great idea given Citrix’s position in that market. Having said that, I recognize that many enterprises deploy Citrix technologies on vSphere; nonetheless, support for vSphere, Hyper-V and XenServer is an absolute must, and I know Nutanix is working on extending support to the other hypervisors.
Another feature that would be welcome is Nutanix array- or block-based replication — maybe an OEM partnership with Veeam?
I am extremely interested in your opinion of Nutanix. I have several customers deploying it, and I welcome your feedback.
This column was originally posted on VirtualizationReview.com.
Posted in: Converged Infrastructure
9 Reasons Microsoft Hyper-V 3 Is Enterprise-Class - 2012.06.04
A few years ago, I wrote a controversial column listing nine reasons Hyper-V was not enterprise-ready and suggesting that Microsoft had lost its innovative edge. I think the only Microsoft employee who didn’t send hate mail was Bill Gates, and I still maintain that column cost me an MVP award.
While I stand by my previous assessment, the situation has changed significantly. Over the past two years, Microsoft has gone strongly into innovation mode, not only with Hyper-V but across many other product lines. The turnaround started with Exchange 2010 and its leveraging of local storage, as opposed to complete reliance on expensive shared storage. It extends to System Center 2012 (probably my favorite), to the new version of SQL Server, which follows in Exchange 2010’s footsteps, and to the new, long-awaited and overdue version of App-V, a technology Microsoft acquired many moons ago but on which it bestowed very little development effort.
But back to Hyper-V. The new version not only addresses all of my previous beefs with the product, it goes from merely playing catch-up with the market leader to actually giving VMware a run for its money at the feature level. This is the first release in which Microsoft puts forth a feature VMware does not have: the ability to perform a live storage migration of a virtual machine without shared storage.
Let’s take a look at that and eight more features that earn Hyper-V a serious look.
1. Storage Live Migration: This capability is now built into Hyper-V Manager, as opposed to requiring System Center Virtual Machine Manager as was the case with Quick Storage Migration. Storage live migration allows IT to migrate a VM’s storage, without any downtime, from one storage system to another. Remember that innovation I was talking about? Traditionally, storage live migration technologies from Microsoft, VMware and others have required a shared storage repository to work properly. In Hyper-V 3, that is not the case. While you can of course use shared storage (and I highly recommend doing so), you can migrate a running VM’s storage from local disk to local disk without any downtime; a conceptual sketch of how this works appears after this list. Now the ball is in VMware’s court to match that functionality.
2. Concurrent Live Migrations: I have for many years criticized Hyper-V’s lack of concurrent live migrations, and I’m very happy to report that the new version finally supports this capability. For a virtualization administrator, this is invaluable functionality. We live in fast times, and we need to be able to react at the speed of the business. Quickly moving all the VMs running on a given server is a definite requirement in any virtual infrastructure, and this release delivers that.
3. Dynamic Memory: While not a new feature in Hyper-V 3 (it was available as of Hyper-V R2 SP1), it is worth noting in a list of reasons Hyper-V 3 is ready for enterprise use. In a nutshell, dynamic memory is a memory management enhancement that allows IT to automatically add or remove memory from a VM on the fly, which is very helpful when trying to improve the density of VMs on a host. It is a vital feature in any enterprise virtual infrastructure.
4. Continuous Availability: This is actually a collection of technologies in Hyper-V 3 that includes, in addition to Live Migration and Storage Live Migration, NIC Teaming and Guest Failover Clustering.
– Failover Clustering: Today, the cluster supports only 16 nodes; in Hyper-V 3, the cluster will be able to support 64 nodes and as many as 4,000 VMs.
– NIC Teaming: IT can now combine NICs from different vendors (say, Intel and Broadcom). There are three modes for configuring NIC teaming: switch independent, static teaming and Link Aggregation Control Protocol. LACP is huge, as it extends support to demanding applications like Citrix Provisioning Services.
Finally, for Windows Server 8 (or Server 2012, depending on what the name ends up being), Hyper-V 3 has a really cool feature that leverages SMB 2.2, which I am super-excited about: it can use file shares as storage destinations. I’m sure you’re thinking “single point of failure,” but remember, you can build up to four-node active-active clustered file servers that provide simultaneous access to the shares. Yes, SMB 2.2 is cool, and the locking mechanism is great as well. Watch out, NFS.
5. Network Virtualization: Microsoft is all-in on cloud, and to be effective in the cloud era you need a solid network stack in your virtual infrastructure. It is worth mentioning that Cisco supports Hyper-V on the Nexus 1000V, so the ecosystem is coming together. In addition, Hyper-V 3 will support policy-based, software-controlled network virtualization. This is crucial in the cloud era, because everything will be about policy-driven automation and orchestration, the key enablers of infrastructure-as-a-service deployments. As part of these capabilities, you can also create a bridge between your on-premises and cloud deployments, moving your subnets into the cloud and creating the logic that allows them to communicate, essentially forming a hybrid cloud.
6. Storage Enhancements: No enterprise virtual infrastructure is complete without tight storage integration, and Hyper-V 3 introduces some impressive improvements here as well. First, the new Offloaded Data Transfer is similar in functionality to VMware’s vSphere APIs for Array Integration, and I am very eager to see how much it improves, or even solves, the locking issues with CSV, which still redirects I/O through the parent partition. Virtual machines can now support up to four vHBAs with direct access to SAN LUNs using multipath I/O. You also get built-in replication, hardware snapshotting and, my personal favorite, Remote Direct Memory Access networking for SMB storage.
7. Platform Enhancements: The platform has seen some major improvements as well, with support for 320 logical processors and up to 4TB of memory per host. It is now possible to provision virtual machines with up to 64 vCPUs and 1TB of memory, a huge upgrade from four vCPUs and 8GB. The new VHDX file format supports virtual hard drives of up to 16TB. These enhancements will fuel the virtualization of Tier 1 applications and are critical for an enterprise-class virtualization platform.
8. RemoteFX: This, again, is not a new feature of Hyper-V 3, but it’s very relevant to enterprise IT. Hyper-V supports GPU virtualization, which in desktop virtualization applications can be of great benefit in terms of enhancing the user experience. Essentially, you’re able to expose a virtual graphics device to a virtual machine and allow multiple virtual desktops to share a single GPU. This would enable users to run graphically intensive applications on a VM.
9. Hyper-V Replica: Hyper-V Replica is a new feature of Hyper-V 3. It asynchronously replicates virtual machines from one Hyper-V host to another over an IP network; unlike VMware vSphere Fault Tolerance, which runs a shadow VM in synchronized lockstep, Replica ships accumulated changes on an interval. The process is configured at the VM level, so it’s not an all-or-nothing proposition. The technology tracks write operations on the source machine and replays them on the destination VM, keeping the replica closely in step with the source. If the source VM fails, the replica can take its place with minimal disruption, a pretty cool enterprise-class feature. (A toy sketch of this mechanism also follows this list.)
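To make item 1 concrete: conceptually, a live storage migration copies the disk in the background while mirroring new writes to both the old and new locations, then switches over. Here is a toy sketch of that general technique, with a dictionary standing in for a virtual disk; it is not Hyper-V’s actual implementation:

```python
class LiveStorageMigration:
    """Toy model: background-copy a disk while mirroring fresh writes."""

    def __init__(self, source: dict):
        self.source = source        # block number -> data
        self.dest = {}
        self.migrating = False

    def vm_write(self, block: int, data: bytes):
        """Writes issued by the running VM; mirrored during migration."""
        self.source[block] = data
        if self.migrating:
            self.dest[block] = data         # keep the destination in sync

    def migrate(self):
        """Copy everything over, then flip the VM to the new storage."""
        self.migrating = True
        for block, data in list(self.source.items()):
            self.dest.setdefault(block, data)   # don't clobber newer mirrored writes
        self.source, self.migrating = self.dest, False

disk = {0: b"os", 1: b"data"}
m = LiveStorageMigration(disk)
m.vm_write(2, b"written while running")
m.migrate()                                  # no pause of the "VM" required
assert m.source == {0: b"os", 1: b"data", 2: b"written while running"}
```

And for item 9, a similarly simplified sketch of asynchronous, log-based replication: track writes on the source, ship them periodically, replay them on the replica. Again, this illustrates the general pattern, not Hyper-V Replica’s real interval or wire format:

```python
class SourceVM:
    """Tracks write operations in a log while the VM keeps running."""
    def __init__(self):
        self.disk, self.write_log = {}, []

    def write(self, block: int, data: bytes):
        self.disk[block] = data
        self.write_log.append((block, data))    # pending, not yet shipped

class ReplicaVM:
    """Applies shipped write logs; always slightly behind the source."""
    def __init__(self):
        self.disk = {}

def replication_cycle(source: SourceVM, replica: ReplicaVM):
    """Ship accumulated writes, replay them, clear the source log."""
    pending, source.write_log = source.write_log, []
    for block, data in pending:
        replica.disk[block] = data

src, dst = SourceVM(), ReplicaVM()
src.write(0, b"boot sector")
src.write(7, b"app data")
replication_cycle(src, dst)     # in a real system this fires on a schedule
assert dst.disk == src.disk     # replica is current until the next writes land
```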
I am really excited about Hyper-V 3, and I hope Microsoft continues its innovation trend and directs more attention toward alternatives for the parent partition approach and to building a better clustered file system. CSV is not bad, but I think the natural evolution is a really solid, scalable file system. Hyper-V 3 will be the first real challenger to VMware vSphere 5, so let’s see how VMware responds. I think competition in this space will continue to drive innovation, and the customer will definitely be the ultimate winner.
This column was originally posted on InformationWeek.com.
Posted in: Hyper-V
PC Applications in The Post-PC Era - 2011.10.06
I subscribe to the school of thought that we’re already in the post-PC era, simply based on the number of mobile devices we support. That point may be arguable, but one thing is not up for debate: PC-based applications, specifically those that run on Windows, are going to be around for a very, very long time, especially in large enterprises. Yes, we hear a lot about SaaS and Web-based alternatives, but who among us doesn’t have some legacy software that we have to keep running?
Most IT teams have struggled to marry new devices, mostly tablets and smartphones with small displays and touch-screen keyboards, with Windows operating systems and the applications that love them. The main sticking point is that Windows is a point-and-click interface. Some smartphones, such as the Motorola Atrix, allow users to dock a phone in a laptop shell, thereby giving access to a full laptop screen and keyboard. Celio offers a Redfly mobile shell and dock. That is, however, another piece of equipment users have to carry. Newer phones also have some sort of video output, like HDMI, that would allow the projection of the phone’s screen onto a larger display, provided such a display is available.
The form-factor problem is another issue. I don’t believe anyone enjoys working on a Windows desktop from a smartphone screen, so people will still carry multiple devices when they move around — a smartphone, a tablet for meetings or on a plane, maybe a laptop PC or Mac just in case.
This problem isn’t going away anytime soon, especially because vendors like Citrix and Microsoft are releasing software that works, or soon will, on nearly any device (Android, iOS, Windows Phone, BlackBerry and HTML5), letting users connect to PC-era applications via VDI and other technologies. Your users may like seeing a Windows desktop or application on their favorite mobile devices, but this just perpetuates the problem.
In response, many enterprises that have deployed desktop virtualization offer Bluetooth keyboards and mice for their tablet users to maximize the experience, but is that really the solution? There has to be a better way of addressing a PC-era computing architecture with the post-PC-era mobility frenzy.
We expect more vendors to start playing in this space, and we’d like to offer a suggestion: Figure out a way to zoom and project the keyboard and screen onto a larger surface, like a holographic display, that can be resized and that allows users to control the brightness and contrast. The technology exists. Now all of a sudden, that smartphone and VDI just became the ultimate computing device for PC-era and post-PC-era applications. We can use the full-size keyboard and holographic display when using point-and-click applications like Word or PowerPoint. The phone is always connected with Wi-Fi and 4G connectivity, so all social media and SaaS applications are available. What else would a road warrior need?
VDI has solved the problem of running Windows apps on smartphones. Now we just need those few missing pieces. We’ll be watching to see what innovations arise.
As written by Sigma’s Technology Officer, Elias Khnaser, for Information Week
One of the biggest hurdles for desktop virtualization adoption is price. Through all my interactions with customers, I am always hearing: “I heard it was more expensive,” “I heard there are no cost savings,” and so on. So let’s compare a desktop virtualization rollout with a traditional physical desktop rollout and see if it truly is more expensive.
That being said, keep in mind that from a CapEx standpoint you will not see much savings; the significant savings are in OpEx. Usually when I say this, customers reply, “My CFO does not care about OpEx; we can’t quantify OpEx, we can’t touch it.” I say, have a little more faith. I will accept that argument and respond as follows: while it is not easy for every organization to quantify or justify OpEx, think about the next time your manager needs a project completed in a week and neither you nor your employees have spare cycles, and the only option is to hire more help or bring in consultants.
Or the next time your CFO’s laptop breaks down and it takes two days (being generous) to replace it and restore his productivity. Or the next time your CFO or CIO flames you for failing to provide adequate or timely technical support to the user community that generates the business’s revenue. At that point you can reply, “We have no cycles; we have been supporting our dispersed, remote user community for years using dated methods; we need a change.” Suddenly, those OpEx savings will look very lucrative.
Let’s proceed with the following scenario: Gordon Gekko Enterprises has 1,000 physical desktops that are seven years old, run Windows XP, and are up against a hardware refresh and an operating system upgrade to Windows 7. Let’s also assume the company has done its homework, knows the benefits of desktop virtualization, and is interested in a ballpark price comparison between physical and virtual desktops. For this exercise I am choosing VDI as the desktop virtualization model; while other models could lower the cost further, I am going to assume the worst-case scenario.
The company ran all the proper assessments and identified 15 IOPS per user as an acceptable number (we’ll keep it simple and not go into the different user profiles). It has decided to give each Windows 7 VM 2GB of memory and one vCPU. Again, we are going to ignore application delivery and assume they have that figured out. The company also wishes to use shared storage in the form of a SAN and has taken the proper steps to avert boot-up and login storms, anti-virus storms and the like.
Gekko wants to use blade technology to support this environment, and its calculations and risk factors accept 60 VMs per host. The math works out as follows:
1,000 VMs / 60 VMs per host = 16.7, rounded up to 17 hosts
Considering 2 GB of memory per VM, this translates into 120 GB of memory per host (128 GB being, of course, the right configuration).
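For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope sketch in Python. The constants are just the assumptions stated above; the names and structure are mine, not from any vendor sizing tool.

```python
import math

# Assumptions from the Gekko scenario above (illustrative only).
TOTAL_DESKTOPS = 1000
VMS_PER_HOST = 60      # density the company's risk factors accept
RAM_PER_VM_GB = 2
IOPS_PER_USER = 15

hosts = math.ceil(TOTAL_DESKTOPS / VMS_PER_HOST)   # 17 hosts
ram_per_host_gb = VMS_PER_HOST * RAM_PER_VM_GB     # 120 GB -> configure 128 GB
total_iops = TOTAL_DESKTOPS * IOPS_PER_USER        # 15,000 IOPS the SAN must sustain

print(f"Hosts needed:   {hosts}")
print(f"RAM per host:   {ram_per_host_gb} GB (configure 128 GB)")
print(f"Aggregate IOPS: {total_iops:,}")
```

The following tables show the TCO.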
Table 1. Desktop Virtualization TCO
Table 2. Physical Desktop Rollout TCO
When reading these numbers, you can of course draw your own conclusions; still, I want to discuss a few here and invite your comments. Keep in mind that I have put a lot of thought into these numbers, and I have been generous with them, so read them carefully.
For example, you can get better special pricing on servers from manufacturers than you can on desktops. Also note that I listed the cost of acquiring 1,000 new thin clients as optional, simply because you can turn your existing 7-year-old machines into thin clients and use them until they break.
I did want to list the cost of acquiring 1,000 new thin clients because I have been criticized in the past for ignoring that number. It also covers newly formed companies that want to deploy desktop virtualization and have no existing equipment.
I have provided these numbers to ruffle some feathers and stir up some healthy conversation. I invite your comments; I really would love to take the pulse of our readers as it pertains to desktop virtualization cost.
Written by Sigma’s Technology Officer Elias Khnaser for Virtualization Review
Posted in: Virtualization
Citrix and The Atrix - 2011.09.09
For the past 18 months, I have been speaking publicly about desktop virtualization, and at every conference I keep stressing the inevitability of the smartphone making a significant impact on desktop virtualization.
If we break down the components of a smartphone today, we end up with a mini-computer. If I have that much power in the palm of my hand, why can’t I use it to power other devices? Why can’t I use it as a thin client? The smartphone has an advantage over traditional thin clients: a 3G or 4G signal, which means built-in Internet access. If I have Internet access anywhere I go, I can reach my VDI (virtual desktop infrastructure) desktop from anywhere, anytime. Now that is cool. Let’s take it one step further: instead of Bring Your Own PC (BYOP), let’s keep the same acronym but say Bring Your Own Phone. Organizations can purchase these smartphones and provide docking stations at desks that extend them to monitors and keyboards, connected to Wi-Fi at work. Do we really need to be wired to the desktop? No. That saves on switching infrastructure, cabling and much more, while investing more in wireless access points.
The Motorola Atrix 4G is a huge step in the right direction for VDI enthusiasts, and the fact that Citrix supports the device with the Citrix Receiver reinforces its advanced position in this market. Organizations are on the verge of a Windows 7 upgrade and a desktop hardware refresh, and some are also on the verge of a mobile phone upgrade for users. That is a pretty large undertaking. What if they changed their way of thinking a bit and, instead of refreshing everything, refreshed the phones with the Atrix or the likes of it? The peripherals (keyboard, mouse and monitor) should already exist; all you have to do is build an infrastructure that can support VDI, and the question of offline access solves itself.
Instead of finding a way to enable users to work offline, we found a way to keep users online with fewer devices and no complex setups. Sure, one can argue that on a plane we still don’t have a signal, but one can also argue that more and more planes have Wi-Fi, and it is just a matter of time before it becomes standard. Let’s face it: we live in a connected world, so let’s change our way of thinking and move forward. Being offline is not an option anymore. While that may have been the case 10 years ago, it is not today. Today, if you are not connected, you are not productive.
The Atrix is just the beginning; the next step will be to create shells for the phone. For example, why should I buy an iPad? Why can’t I just buy a shell that looks like an iPad and slide my iPhone into it to light up all the features of an iPad? Why can’t I get into my car, slide my phone in, and have that light up navigation and everything else I get from my in-car entertainment system today?
Tablets are not the future. You know what is? Smartphones.
Written by Sigma’s own Elias Khnaser, contributor to Forbes
The History of Wireless – Part 1 - 2011.07.07
The very first 802.11 wireless networking standard was ratified in 1997. These first wireless networks were very slow and barely usable: early 802.11 used FHSS modulation and could only achieve speeds of 1 and 2Mbps. It wasn’t until 1999, when 802.11b was ratified, that wireless networking began to really catch on and speed up. Around the same time, 802.11a access points were available and could support wireless speeds of up to 54Mbps, but 802.11a didn’t catch on with enterprise customers or home users since it was more expensive and there weren’t nearly as many client devices that supported the 802.11a (5GHz) frequencies. This pattern of wireless adoption leaning toward 2.4GHz continued for many years.
In 1999 you could only hope for 2.4GHz wireless speeds of a theoretical 11Mbps, with actual throughput closer to 5.5Mbps due to the half-duplex nature of wireless technology. The DSSS data rates supported speeds of 1, 2, 5.5 and 11Mbps. When OFDM for 2.4GHz was released in 2003, the additional data rates of 6, 9, 12, 18, 24, 36, 48 and 54Mbps became available in the 2.4GHz frequency. Four years earlier 802.11a had been able to support the same speeds, but there were simply more 802.11b/g client devices available.
With the ratification of 802.11n finally happening in 2009, the 2.4GHz frequencies are now capable of additional speeds, when using 20MHz-wide channels, of 7.2, 14.4, 21.7, 28.9, 43.3, 57.8, 65 and 72.2Mbps. The real speed increases of 802.11n are realized when two channels are bonded together into a 40MHz-wide channel, doubling the theoretical throughput to speeds of 15, 30, 45, 60, 90, 120, 135 and 150Mbps. Of course, there are still only three non-overlapping 2.4GHz channels (1, 6 and 11), so bonding channels together in the 2.4GHz spectrum quickly leaves you with little room for a non-overlapping channel plan. Utilizing the 5GHz spectrum for 40MHz channel bonding is the obvious choice: the 5GHz spectrum allows for at least 12 non-overlapping channels (depending on the country codes in use).
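If you’re curious how those two rate tables relate, here’s a rough sketch in Python. This is my own simplified model, not anything out of the 802.11n specification; it assumes the short guard interval and uses the fact that a 40MHz channel carries 108 data subcarriers versus 52 at 20MHz. (Spatial streams, which the multiplier also accounts for, are covered just below.)

```python
# Single-stream, short-guard-interval 802.11n rates for a 20MHz channel (Mbps),
# as listed above. The other rates can be derived from these.
SHORT_GI_20MHZ = [7.2, 14.4, 21.7, 28.9, 43.3, 57.8, 65.0, 72.2]

def phy_rate(mcs: int, bonded_40mhz: bool = False, streams: int = 1) -> float:
    """Approximate 802.11n PHY rate in Mbps (simplified model, not the full MCS table)."""
    rate = SHORT_GI_20MHZ[mcs]
    if bonded_40mhz:
        rate *= 108 / 52  # 40MHz channels carry 108 data subcarriers vs. 52 at 20MHz
    return round(rate * streams, 1)

print(phy_rate(7))                                 # 72.2  -> top 20MHz single-stream rate
print(phy_rate(7, bonded_40mhz=True))              # 150.0 -> top 40MHz single-stream rate
print(phy_rate(7, bonded_40mhz=True, streams=4))   # 600.0 -> the 802.11n maximum
```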
Early 1 and 2Mbps wireless networks usually did not incorporate antenna diversity into the design, but even as early as 1999 access points were being built with antenna diversity capabilities. Antenna diversity increases the odds that you receive a better signal on one of the antennas. This becomes even more important in 802.11n access points, where MIMO (Multiple Input, Multiple Output) antennas are integral to achieving 802.11n wireless speeds.
Higher throughput via 802.11n is possible with multiple antennas as well as access points that are capable of sending multiple data streams. The number of spatial streams an access point can support is represented as a×b:c, where (a) is the number of transmit antennas, (b) is the number of receive antennas, and (c) is the maximum number of spatial streams the access point’s radio can support. An access point identified as 3×3:2 has three antennas for transmitting, three for receiving, and is capable of sending two concurrent spatial streams. It is possible to achieve data rates up to 600Mbps with four spatial streams using a 40MHz-wide channel. Of course, this also means you need a gigabit switch to connect your access points to the LAN, or you’re creating a potential network bottleneck at the switch port.
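For the notation-minded, here’s a tiny illustrative helper that unpacks the a×b:c shorthand; the function name and output format are mine, purely for demonstration.

```python
# Hypothetical parser for the a x b : c antenna/stream notation described above.
def parse_mimo(spec: str) -> dict:
    """Parse a spec like '3x3:2' into antenna and spatial stream counts."""
    antennas, streams = spec.lower().replace("×", "x").split(":")
    tx, rx = antennas.split("x")
    return {"tx_antennas": int(tx), "rx_antennas": int(rx), "spatial_streams": int(streams)}

print(parse_mimo("3×3:2"))
# {'tx_antennas': 3, 'rx_antennas': 3, 'spatial_streams': 2}
```

Posted in: Wireless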
The Wireless Control System Configuration Guide covers how to manage RF Calibration Models, but it does not describe how long the process takes or what exactly it entails. I will endeavor to describe the process according to how I’ve calibrated RF models. I don’t know whether my approach is the correct one; it has been a matter of trial and error. You don’t get the opportunity to calibrate RF deployments very often, and the number one reason for that is most likely how long the calibration takes to complete.
I haven’t had much luck using the linear calibration model, so I use the point calibration model instead. I configure my wireless card to operate as an 802.11a client for one set of point calibrations throughout the facility, then I configure it to operate as an 802.11b/g client (only) for the second pass at the calibration process.
I don’t stop calibrating until I have covered the floor area with data points from one corner of the floor to the other. I don’t know if this is strictly necessary, but the data collected across the entire floor appears “complete” to a customer reviewing the RF calibration.
Recently I did a full calibration of a 34,000-square-foot facility. The deployment consisted of 11 3500i-series CleanAir access points. The calibration took approximately 4 hours from beginning to end: two hours to calibrate for the 5GHz frequency, and two hours for a second pass to calibrate for the 2.4GHz frequency. Each point calibration location took at least two minutes to sample.
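The planning arithmetic is simple enough to sketch. In the snippet below, the roughly 60 sample points per band are my own inference from the two hours per band at about two minutes per point; treat the numbers as illustrative.

```python
# Rough estimate of how long a two-band point calibration will take.
# Walking time between points is folded into minutes_per_point here.
def calibration_hours(points_per_band: int, minutes_per_point: float, bands: int = 2) -> float:
    return points_per_band * minutes_per_point * bands / 60

# ~60 points per pass at 2 minutes each matches the 4 hours I measured.
print(calibration_hours(points_per_band=60, minutes_per_point=2))  # 4.0
```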
Neither of the design/configuration guides tells you exactly what you’re supposed to do with the laptop when you’re using the point collection model, unless you’re really supposed to pirouette while holding it. I tried to follow that example for the first calibration I did, and it just ended up making me dizzy. Now I stand in one place and change the laptop’s orientation while changing the direction I’m facing. I’ve found that if I hold the laptop in the same orientation, the data point collection fails quite often.
The Wi-Fi Location-Based Services 4.1 Design Guide states:
“Due to an open caveat concerning the use of dual-band calibration clients and performing a location calibration data collection on both bands simultaneously, it is recommended that calibration data collection be performed for each band individually at this time. When using a dual-band client, use either of the following alternatives:
- Perform the calibration data collection using a single laptop equipped with a Cisco Aironet 802.11a/b/g Wireless CardBus Adapter (AIR-CB21AG) on each band individually. For example, proceed to disable the 5 GHz band and complete the data collection using the 2.4 GHz band only. Then, disable the 2.4 GHz band and enable the 5 GHz band, and proceed to repeat the data collection using the 5 GHz band only.
- Perform the calibration using two people and two laptops. Each laptop should have a Cisco AIR-CB21AG and be associated to the infrastructure using a different band. The two calibration operators may operate independently; there is no need for them to visit each data point together. In this way, a complete calibration data collection can be performed across both bands in half the amount of time as option #1 above.”
“Temporarily disable Dynamic Transmit Power Control (DTPC) prior to conducting calibration data collection. DTPC must be disabled separately for each band using either the controller GUI, the controller CLI or WCS for each controller whose registered access points are expected to participate in calibration data collection. After calibration data collection has been performed, DTPC should be re-enabled for normal production operation.
Ensure that the WLAN to which your calibration client will associate is configured to support Aironet Information Elements (Aironet IE). Doing so will enable the use of unicast radio resource measurement requests during calibration data collection for more efficient operation.”
According to the WCS Configuration guide: “Only Intel and Cisco adapters have been tested. Make sure the Enable Cisco Compatible Extensions and Enable Radio Management Support are enabled in the Cisco Compatible Extension Options.”
Also of note from the WCS Configuration guide: “The calibration status bar indicates data collection for the calibration as done, after roughly 50 distinct locations and 150 measurements have been gathered. For every location point saved in the calibration process, more than one data point is gathered. The progress of the calibration process is indicated by two status bars above the legend, one for 802.11b/g/n and one for 802.11a/n.”
Posted in: Cisco
Day One – Wireless Tech Field Day - 2011.03.17
The morning started off with a great Chanalyzer Pro demonstration by the great people at MetaGeek. Ryan Woodings and Trent Cutler were awesome at explaining the ins and outs of the company’s origins and how to customize the Chanalyzer Pro application. I had previous experience using Chanalyzer Pro, since Ryan was kind enough to send me a Wi-Spy dBx, which I tested and compared against AirMagnet’s Spectrum XT and Cisco’s Spectrum Expert tool.
I was not aware that there are home sound systems that can be installed in light fixtures, and I hadn’t thought of using a Wi-Spy to identify an absconding shooter by finding security cameras in the vicinity of a convenience store crime scene.
There have been a lot of advances to the Chanalyzer application since my demo license expired, but we were all gifted a cool lunchbox with all the MetaGeek tools inside, so I’ll be back to using Chanalyzer Pro ASAP!
Cisco started off with David Stiff presenting the Cisco CleanAir solution. I’ve heard this presentation many times, and I’ve presented it several times as well. Based on some of the questions asked by the other delegates, they were not as familiar with the CleanAir/WCS/client troubleshooting tools as I was, so I was glad the information wasn’t a repeat for everyone present.
Funny fact: the Cisco WNBU development team has code names for internal and external antennas. Internal antennas are named after soaps, and external antennas after trees. The AP I spotted with the code word written on it was called Larch; naturally, I thought of the Monty Python sketch ‘How to Recognize Different Types of Trees From Quite a Long Way Away.’
I’ll be adding to this post with information about the MobileAccessVE Multi-Tier architecture and whatever great information Jameson Blandford (Cisco YouTube Star) will divulge to the Tech Field Day delegates.
Gestalt Tech Field Day - 2011.03.05
In a matter of just a few days, the first-ever wireless-focused Gestalt IT Tech Field Day will be kicking off in San Jose. The event is scheduled for March 17th and 18th, and all the last-minute details are being finalized!
I’ve been making a list of questions about why things don’t work a certain way, or whether something will ever be possible, based on the questions I’ve been asked at customer sites. I’m hoping that the questions I can’t answer (and that defy googling) will be answered by one of the sharp minds attending or presenting at the Wireless Tech Field Day.
I know for a fact that all of the delegates reached out to their industry connections to explain why they should sponsor this event. Many emails were sent and many phone calls were made; I know I’ve called, emailed or tweeted every contact I’ve ever had at every wireless vendor I’ve worked with. Some were hard to track down, but I wasn’t going to give up until they’d said ‘No’ at least twice! I’ve been helping Stephen Foskett make this event happen because I’m so excited that an event like this can even be organized. Claire Chaplais, however, is the person who really ties everything together behind the scenes and makes the event come off without a hitch or a hiccup in the overall flow. Claire and Stephen are a great team, and there isn’t another event quite like a Tech Field Day. I’m very glad to have helped put Stephen and Claire in touch with my connections to make a Wireless Tech Field Day happen.
When was the last time you heard of competing wireless companies coming together to put their best subject matter experts in front of a group of wireless engineering professionals with an aggregate of over 62 years of wireless experience? It is refreshing to see companies stand by their technology solutions and open themselves up to potentially difficult technical questions from the Wireless Tech Field Day delegates. Everyone wins.
All the information about the event, the sponsors and the delegates can be found on the Gestalt IT Wireless Tech Field Day page.
The event will also be streamed live from TechFieldDay.com, so don’t forget to tune in!
Posted in: Tech Field Day
HP E-Series Mobility Portfolio - 2011.03.03
HP has launched a new series of access points through a combined effort of the HP and Colubris development teams. The new access point model numbers are the E-MSM460, E-MSM466 and E-MSM430.
HP’s goal is to bring ‘single pane of glass’ management to the wireless and wired networks by integrating HP Mobility Manager 3.10 into the existing IMC solution. Mobility Manager can also run as a plugin to an existing PCM+ installation.
The biggest news to me was the E-MSM466, which is capable of concurrent dual-radio operation in the 5GHz band. This allows the access point to double the channel capacity, and therefore the supported client count, in high-density deployments. The published statistics for this access point indicate a maximum performance of 450Mbps per radio. Using two 5GHz radios in an access point is interesting, but there are still a lot of 2.4GHz clients in use on almost every WLAN. Having all the clients in a specific area be 802.11a-capable devices may be a reality for some enterprise deployments, but I’d bet that most have a wireless client mix that can’t be controlled or influenced by the IT department.
The HP mobility line supports different modes of operation: AP, mesh and monitor (packet capture). The new features of the HP mobility hardware product line are standards-based (explicit) beamforming and band steering. There was no mention of spectrum analysis capability in any of the HP access point offerings; without spectrum analysis, the HP mobility portfolio cannot identify sources of interference. It can only adjust an access point’s power and channel in reaction to interference.
The comparison slide was hard to read, mostly because the TxR:S numbers for each of the access points are not clearly stated on it. For reference, the Cisco 1142N is a 2×3:2 access point, and the HP MSM410 is a 3×3:2 access point.
I found it interesting that the MSM410 performed only slightly better than the 1142N, even though the radio in the MSM410 has three transmit and three receive antennas. The difference between the E-MSM460 and the Cisco 1142N is marked, owing to the fact that the E-MSM460 is a 3×3:3 access point. The metrics on this chart show the E-MSM460 providing 150Mbps of throughput at a distance of 230 feet from the access point, which works out to one access point every 1,400 feet. If this distance is to be used as the gauge for the cell edge, that’s a pretty dense access point deployment!
One thing I found of note was the ability to change an access point into an autonomous access point just by changing its operating mode from the controller. You’re not required to change the code running on the access point in order to make it function independently of the controller.
The HP mobility solution does not use the standards-based CAPWAP protocol for its controller-based solution. Instead, HP uses a proprietary wireless protocol based on IAPP, with OpenVPN UDP tunnels used to simplify network connectivity on LANs using NAT.
This mobility announcement from HP will be great news for existing HP mobility customers, but I am doubtful that customers with an already-deployed WLAN infrastructure will find enough compelling features to make the switch to the new HP E-MSM product line. However, some customers may value the cost benefit of the next-day replacement that is part of the HP lifetime warranty.