Wednesday, 29 April 2015

Linux vendor Cumulus rolls out management pack

Based on Cumulus network operating system code, designed for consistent rack management

LAS VEGAS -- Linux network operating system developer Cumulus Networks this week at Interop rolled out a management platform that provides a common interface and operational process for data center racks.

The Cumulus Rack Management Platform is based on the company’s Cumulus Linux network operating system code base. Out-of-band management switches running Cumulus RMP may be managed by the same Linux toolsets as both servers and data-plane switches running Cumulus Linux, the company says.
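To make that concrete, here's a minimal sketch of what "same Linux toolsets" can mean in practice: one script, using the common paramiko SSH library, polling servers, data-plane switches, and management switches alike with an ordinary Linux command. The hostnames and credentials are hypothetical, not from Cumulus documentation.

```python
# Minimal sketch (hypothetical hosts): audit servers, data-plane switches,
# and RMP management switches the same way, because all of them present a
# standard Linux shell over SSH.
import paramiko

HOSTS = ["web-01", "leaf-sw-01", "rmp-mgmt-01"]  # servers and switches alike

def run_uptime(host: str) -> str:
    """Run the ordinary Linux `uptime` command on any Linux host."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username="admin")  # assumes SSH keys are in place
    try:
        _, stdout, _ = client.exec_command("uptime")
        return stdout.read().decode().strip()
    finally:
        client.close()

if __name__ == "__main__":
    for host in HOSTS:
        print(f"{host}: {run_uptime(host)}")
```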

Cumulus RMP will be available through providers of open rack management switches. Penguin Computing will deliver the first platform supporting the Cumulus RMP operating system, the Penguin Arctica 4804ip. The 4804ip will include the license for Cumulus RMP, a three-year warranty and three years of customer support.

Penguin Computing currently has five platforms on the Cumulus Linux hardware compatibility list, including 1GbE, 10GbE and 40GbE switches. Penguin this week said it will also support Pica8's network operating system.

Cumulus also announced that Supermicro has joined the company’s Open Hardware program, and is now supporting Cumulus Linux on Supermicro bare metal switches. Cumulus’ hardware compatibility list now includes 25 switches from seven partners, including Dell and HP.

Monday, 20 April 2015

Lessons from Altoona: What Facebook's newest data center can teach us

How can Facebook's data center design apply to your data center plans?
Over the past year, Facebook has thrown some interesting wrenches into the gears of the traditional networking industry. While mainstream thinking is to keep most details of your network operations under wraps, Facebook has been freely sharing its innovations. For a company whose business model is built on people sharing personal information, I suppose this makes perfect sense.

What makes even more sense is the return Facebook gets on its openness. Infrastructure VP Jason Taylor estimates that over the past three years Facebook has saved some $2 billion by letting the members of its Open Compute Project have a go at its design specifications.

But what really turned heads was last year's announcement of Wedge, an open top-of-rack switch developed with the OCP community. Wedge was followed eight months later by 6-Pack, a modular version of Wedge purposed for the network core. Added to these bare-metal switches are FBOSS, an open Linux-based network operating system (well, not exactly an operating system – more on that in a later post), and OpenBMC for system management.

Why this openness matters to the rest of us is that all of this is not just a mad-science project within Facebook's lair. You will soon be able to buy Wedge through Taiwanese switch manufacturer Accton, bringing switches into your data center for a fraction of the cost of proprietary switches with integrated operating systems. And you're not locked into running FBOSS on the switch either. You can shop around, choosing the NOS that makes the most sense to you, such as Open Network Linux, Cumulus Linux, Big Switch's Switch Light, and possibly others such as Pica8's PicOS or even Juniper's JUNOS. If you have an intrepid team of developers with time on their hands, you can even build your own.

I'll write more about open switches and open software in subsequent articles, but for now I want to focus on what Facebook has been sharing about their innovations in data center network design and what it means for you. Last November, between the announcements of Wedge and 6-Pack, Facebook opened its newest data center in Altoona, Iowa. And as it has done with its other network innovations, Facebook openly shared its new design.

It turns out that there are some valuable takeaways from the Altoona design that can be applied to data centers of any size.

Hyperscale Misconceptions
Say "hyperscale data center" to most anyone who keeps up with such things, and they'll reflexively name Facebook, Google, and Amazon. And because of this association, people think of hyperscale as something that applies only to mammoth data centers supported by an army of developers.

In reality, hyperscale just means the ability to scale out very rapidly. A hyperscale data center network might be small, but it can grow exponentially larger without changing the fundamental components and structures of the network. You should be able to use the same switches and the same interconnect patterns as you grow – just more of them. You do not need to throw out one class of switches for another just to accommodate growth.

You can have a data center consisting of just a few racks, and if the network is designed right it is a hyperscale data center. Hyperscale is a capability, not a size.
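As a back-of-the-envelope illustration (my own numbers, not anyone's shipping design), here's what "same building blocks, just more of them" looks like for a two-tier leaf-spine fabric built from one fixed 48-port switch model:

```python
# Back-of-the-envelope illustration: in a two-tier leaf-spine fabric built
# from a single fixed switch model, growth means adding more of the same
# switch, never swapping in a bigger one.
def leaf_spine_capacity(leaf_ports: int, uplinks_per_leaf: int,
                        spine_ports: int) -> dict:
    """Capacity of a two-tier fabric built from identical fixed switches."""
    downlinks_per_leaf = leaf_ports - uplinks_per_leaf
    max_leaves = spine_ports            # each leaf uses one port per spine
    return {
        "spines": uplinks_per_leaf,     # one spine per leaf uplink
        "max_leaves": max_leaves,
        "max_servers": max_leaves * downlinks_per_leaf,
    }

# The same 48-port switch model serves a few racks or the full build-out:
print(leaf_spine_capacity(leaf_ports=48, uplinks_per_leaf=4, spine_ports=48))
# {'spines': 4, 'max_leaves': 48, 'max_servers': 2112}
```

Whether you deploy three leaves or all 48, the part list and the wiring pattern stay the same; only the quantities change.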

Another misconception about hyperscale data centers is that they are optimized for one or a relatively few applications at massive scale across the entire data center. This stems particularly from the Facebook and Google associations. Hyperscale designs are in fact ideal for very heavy east-west workloads, but hyperscale design principles can apply to an average enterprise data center, supporting hundreds of business applications just as easily as it supports a single social media, big data, or search app.

Hyperscale also conjures up images of do-it-yourself networks built from the silicon up by a cadre of brilliant young architects commanding salaries far out of reach of the average network operator. That might be true of the innovators, but because Facebook has laid its work right out on the table, mere mortals like you and me can put its design principles to work in our own data centers.

To appreciate the significance of the Altoona network, let's first have a look at the network architecture Facebook is using in its earlier data centers.

Good is not good enough: Facebook's cluster design
Figure 1 shows Facebook's pre-Altoona aggregated cluster design, which they call the "4-post" architecture. Up to 255 server cabinets are connected through ToR switches (RSW) to high-density cluster switches (CSW). The RSWs have up to 44 10G downlinks and four or eight 10G uplinks. Four CSWs and their connected RSWs comprise a cluster.
[Figure 1: Facebook's "4-post" aggregated cluster architecture]

Four "FatCat" (FC) aggregation switches interconnect the clusters. Each CSW has a 40G connection to each of the four FCs. An 80G protection ring connects the CSWs within each cluster, and a 160G protection ring connects the FCs.

This is a good design in several ways. Redundancy is good; oversubscription is good (generally 10:1 between RSWs and CSWs, 4:1 between CSWs and FCs); the topology is reasonably flat with no routers interconnecting clusters; and growth is managed simply, at least up to the 40G port capacity of the FCs, by adding new clusters.
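The stated ratios follow directly from the port counts. Here's a quick sanity check using the RSW figures given above (the CSW downlink count isn't given in the text, so only the cited 4:1 is noted):

```python
# Sanity-checking the oversubscription ratios cited above, using the
# port counts given in the text.
def oversubscription(down_ports: int, down_gbps: int,
                     up_ports: int, up_gbps: int) -> float:
    """Ratio of downstream capacity to upstream capacity at a switch tier."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# RSW tier: up to 44 x 10G downlinks with four or eight 10G uplinks
print(oversubscription(44, 10, 4, 10))  # 11.0 -> roughly the "10:1" cited
print(oversubscription(44, 10, 8, 10))  # 5.5  -> with eight uplinks
# CSW-to-FC oversubscription is cited as roughly 4:1; the CSW downlink
# count isn't given in the text, so it can't be recomputed here.
```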

But Facebook found that good is not good enough.

Most of the problems with this architecture stem from the necessity of very large switches for the CSWs and FCs:

- With just four boxes handling all intra-cluster traffic and four boxes handling all inter-cluster traffic, a switch failure has a serious impact. One CSW failure reduces intra-cluster capacity by 25%, and one FC failure reduces inter-cluster capacity by 25%.
- Very large switches restrict vendor choice – there are only a few "big iron" manufacturers. And because these few vendors sell relatively few big boxes, the per-port CapEx and OpEx is disproportionately high compared to the smaller switches offered by a larger number of vendors.
- The proprietary internals of these big switches prevent customization, complicate management, and stretch waits for bug fixes to months or even years.
- Large switches tend to have oversubscribed switching fabrics, so all ports cannot be used simultaneously.
- The cluster switches' port densities limit the scale and bandwidth of these topologies, and make transitions to next-generation port speeds too slow.
- Facebook's distributed application creates machine-to-machine traffic that is difficult to manage within an aggregated network design.

Altoona's disaggregated fabric
Altoona's answer is a fabric built from many small switches instead of a few big ones. The data center is broken into pods, each holding 48 server racks; every ToR switch has four 40G uplinks, one to each of the pod's four fabric switches. The individual pods are in turn connected via 40G uplinks to four spine planes, as shown in Figure 3. Each spine plane can have up to 48 switches. Key to this topology is that the fabric switches each have an equal number of 40G downlinks and uplinks – maxing out at 48 down and 48 up – so the fabric is non-blocking and there is no oversubscription between pods. Bisection bandwidth, running to multiple petabits, is consistent throughout the data center.
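The non-blocking claim is just port arithmetic: a fabric switch with as much uplink capacity as downlink capacity can forward everything it receives. A small sketch using the figures above (the pod and plane counts are illustrative, not Facebook's exact build-out):

```python
# Port arithmetic behind the non-blocking claim, using the figures above.
# Pod and plane counts below are illustrative, not Facebook's exact build-out.
FABRIC_DOWN_GBPS = 48 * 40   # 48 x 40G downlinks to the pod's rack switches
FABRIC_UP_GBPS   = 48 * 40   # 48 x 40G uplinks to the spine plane

# Equal capacity in and out of each fabric switch -> no oversubscription.
assert FABRIC_DOWN_GBPS == FABRIC_UP_GBPS

PODS, FABRIC_SWITCHES_PER_POD = 48, 4
total_gbps = PODS * FABRIC_SWITCHES_PER_POD * FABRIC_UP_GBPS
print(f"pod-to-spine capacity: {total_gbps / 1e6:.2f} Pbps per direction")
# ~0.37 Pbps per direction at these counts; counting both directions and
# the server-facing side is what pushes the aggregate into multi-petabit
# territory.
```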

The diagram in Figure 3 shows the color-coded connections between fabric switches and their corresponding spine planes, but doesn't do justice to how it all ties together. And something that surely strikes you is that there are a lot of links between fabric switches and spine switches. Optics and cables can become expensive, so it's important to manage the distances between pods and spine planes. (If you're interested in learning more about Facebook's architectures, here are the source documents I used for cluster architecture (PDF) and the Altoona architecture.)

If you rotate the pods and line them up – the way the 48 racks of each pod would be arranged into rows in the data center – and then do the same with the spine planes, lining them up perpendicular to the pods, you get the three-dimensional diagram shown in Figure 4, with the fabric switches becoming part of the spine planes. Distances between fabric switches and spine switches are reduced. Note that there are also edge pods, which provide external connectivity to the fabric.

Facebook network engineer Alexey Andreyev describes the fabric this way: "This highly modular design allows us to quickly scale capacity in any dimension, within a simple and uniform framework. When we need more compute capacity, we add server pods. When we need more intra-fabric network capacity, we add spine switches on all planes. When we need more extra-fabric connectivity, we add edge pods or scale uplinks on the existing edge switches."

If you want to hear Andreyev describe the Altoona architecture himself, here's an excellent video:

Altoona Takeaways

You might be wondering by now what any of this has to do with you and your data center. After all, Facebook is supporting more or less a single distributed application generating machine-to-machine traffic spanning its entire data center. You probably aren't. And while a 48-rack pod is a scale-down from its earlier clusters, most enterprise data centers in their entirety are smaller than 48 server racks.

So why should you care? Because it's not the scale. It's the scalability.

The fundamental takeaway from the Altoona design is the advantage of building your data center network from small open switches, in an architecture that lets you scale to any size without changing the basic building blocks. First, look at the switches. You don't have to wait for Wedge or 6-Pack to go on the market (Accton will be selling Wedge soon). You can pick up bare-metal switches from Accton, Quanta, Celestica, Dell, and others for a fraction of the cost a big-name vendor will charge. For example, a Quanta switch with 32 40G ports lists for $7,495. A Juniper QFX5100 with 24 40G ports lists for a little under $30,000. Is that a fair comparison? The JUNOS premium gives you a pretty awesome operating system, but the bare-metal switch gives you a bunch of options for loading an OS of your choice.
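Reducing those list prices to cost per 40G port makes the gap plain. Street math only: optics, support contracts, and whatever NOS license you buy for the bare-metal box all shift the totals.

```python
# Cost per 40G port from the list prices quoted above. Street math only:
# optics, support contracts, and a NOS license for the bare-metal switch
# would all shift these totals. $29,999 stands in for "a little under $30,000".
quanta_per_port  = 7_495 / 32     # bare-metal switch, 32 x 40G ports
juniper_per_port = 29_999 / 24    # Juniper QFX5100, 24 x 40G ports

print(f"bare-metal: ${quanta_per_port:,.0f} per port")   # ~$234
print(f"big-name:   ${juniper_per_port:,.0f} per port")  # ~$1,250
```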

As for the pod and core design, that can be adjusted to your own needs. The pod can be whatever size you want; while the "unit of network" is a wonderful concept, it's not a rule. You can create a number of pod designs to fit specific workflow needs, or just to start a migration away from older architectures. Pods can also be application specific. As your data center network grows, or you adopt newer technologies, you can non-disruptively "plug in" new pods.

The same goes for the core. You can build it at layer 2 or at layer 3; it all depends on the workflows you're supporting. Using a simple pod-and-core design you can manageably grow your data center network at whatever rate makes sense to you, from a new pod every few years to explosive growth of new pods every few months.


Tuesday, 7 April 2015

Microsoft’s new Spartan browser for Windows 10

Here’s what sets Spartan apart from Internet Explorer.

Spartan
The most recent Windows 10 Technical Preview comes with Spartan, a web browser that will eventually replace Internet Explorer. It’s not an updated version of IE under a different name; it's a new browser that Microsoft built from scratch. Here's what sets Spartan apart from Internet Explorer.

New name
For the time being, the browser is officially referred to as Project Spartan, and "Spartan" may or may not be its final name when it’s released with Windows 10. Sure, unlike "Internet Explorer," the name doesn’t strongly imply that this program is for browsing the Internet, but neither do the names Chrome, Firefox, Opera or Safari. So we think "Spartan" sounds like it would fit in perfectly among its competitors.

Internet Explorer lives on
Microsoft's previous, and not much-loved, browser will still be included in Windows 10, in case you need to visit sites or use web services that absolutely require it. This will probably apply mostly to enterprise users. The current Windows 10 Technical Preview doesn't list Internet Explorer on the desktop or taskbar; it's hidden under Windows Accessories in the Start Menu.

Spartan is the default
Spartan will be set as the default browser in Windows 10. You can install another browser, like Chrome or Firefox, and make it the default instead. There isn't a way to set Spartan back to default from within its own settings; to do that, you have to go to the new Windows 10 Settings app.

Spartan features Edge rendering engine

Not only will Windows 10 come with two web browsers, each browser will use a different rendering engine. IE will still use Trident, while Spartan comes with its faster and more technologically up-to-date successor, Edge. Originally, Microsoft considered stuffing both engines into its new browser, but elected not to in order to keep the separation between the two browsers clear: IE sticks around for backward compatibility.

Spartan becomes Windows app
Is Microsoft's new browser a Windows app or desktop application? It appears to be the former. In Windows 10, users will interact with Windows apps on the desktop environment in resizable windows; the overall feel from using Spartan suggests it is such an app. It also shares the same design language as the other new, resizable Windows apps coming to Windows 10, such as the Store and Maps apps, as seen in its title bar and borderless frame.

Spartan has cleaner, simpler look
As its name implies, compared to IE, Spartan sports a cleaner-looking UI, with a borderless viewing pane and simpler graphical elements in the toolbar. This minimalism is also evident in its settings menu, which displays options in large text and isn't cluttered. In a side-by-side matchup, Spartan's GUI initially looks similar to IE's -- the main differences are that Spartan's has fewer colors and slightly larger toolbar icons, and its tabs sit above the toolbar rather than within it, as IE's do. Spartan's arrangement of tabs looks less confusing.

Spartan has link sharing feature
This is a minor feature, but one that isn’t in the latest IE. In Spartan, you can send a link directly to another Windows app, such as OneNote or the Reading List.

Cortana is integrated into Spartan
Microsoft's personal digital assistant Cortana will come with Windows 10. It's similar to Apple's Siri or Google's Google Now: you speak a command or question aloud, and the technology scours the Internet for the requested information, sometimes reading out what it finds in a digital voice. Cortana's features are integrated into Spartan but, as of this writing, can be accessed only in the US versions of the latest Windows 10 Technical Preview; Microsoft plans to expand availability to other countries soon.

Spartan will likely support extensions
Firefox has add-on functionality, while Chrome refers to its equivalent feature as extensions. Under Spartan, add-ons appear to refer to plugins for running multimedia technologies, like Flash. It's been reported that Spartan’s final release will have extension support similar to Chrome, so developers will be able to write tools to enhance the usability of the browser.

Spartan has 'reading view' for smaller screens
Spartan can re-render certain pages to display only the main body of text and a related image, stripping out extraneous graphics and text from the original layout. This is meant to make an online article more legible and visually comfortable to read, especially on a tablet. To do this, you click the open-book icon to the right of the URL address bar. The function isn't available when the icon is grayed out: not every page can be stripped down to its essentials. Spartan's reading view tends to be available when you visit a page showing an article or blog entry, but not always.

Spartan integrates with Web Note drawing tool
This ballyhooed feature lets you draw right onto a page, doodling over it or jotting handwritten notes (if you are using a digital pen on a Windows 10 tablet). But technically what Web Note does is capture an image of a page, and then give you basic drawing and highlighting tools. You can also annotate the image with notes you type in, and copy the image of the page, or portions of it, so that you can paste it into a document or image that you’re editing in another program. Pages can be saved as a favorite (bookmark), added to the browser’s reading list, or forwarded to other Windows apps through Share.
