
Showing posts from 2014

SDN and SmartNIC

SDN, or the programmable network, is quickly being split into three separate problems: the programmable management plane, the programmable control plane, and the programmable data plane. A lot of time is spent discussing programmable management and control, but very little on the programmable data plane.

Whatever overlay (over L2 or L3) is used for multi-tenancy, unless all the communicating tenants are on the same host, the packet needs to leave the host. The host interfaces with the underlying fabric through a NIC of variable complexity. Today the fabric is not multi-tenant and the NIC is not really programmable.
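To make the encapsulation concrete, here is a minimal sketch in scapy of a tenant's L2 frame leaving the host inside a VXLAN (MAC-in-IP over UDP) overlay. The MACs, IPs and VNI below are placeholder values, not from any real deployment:

    # Tenant frame carried across the L3 fabric in VXLAN.
    from scapy.all import Ether, IP, UDP
    from scapy.layers.vxlan import VXLAN

    inner = Ether(src="52:54:00:aa:00:01", dst="52:54:00:aa:00:02") / IP() / UDP()
    outer = (Ether() /
             IP(src="10.0.0.11", dst="10.0.0.12") /  # source and destination hosts
             UDP(sport=49152, dport=4789) /          # 4789 = IANA VXLAN port
             VXLAN(vni=5001) /                       # the tenant identifier
             inner)
    print(outer.summary())

Everything here runs on the host CPU; the NIC just sees one more UDP packet, which is exactly why the fabric underneath stays tenant-unaware.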

The multi-tenant fabric is a focus of academic research. As always, they will come up with a new tag that needs to be inserted after the existing fabric tags, but its adoption will require programmable silicon on the NIC. This NIC needs to be truly programmable (in the FPGA sense of the word, not the SR-IOV VF sense).
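A speculative sketch of what such a tag could look like, expressed as a custom scapy header stacked after the existing 802.1Q tag. The field layout and the use of the IEEE experimental EtherType 0x88B5 are assumptions made for illustration, not any proposed standard:

    from scapy.all import Ether, Dot1Q, IP, Packet, bind_layers
    from scapy.fields import IntField, XShortField

    class TenantTag(Packet):
        name = "TenantTag"
        fields_desc = [IntField("tenant_id", 0),     # fabric-wide tenant number
                       XShortField("type", 0x0800)]  # EtherType of the payload

    bind_layers(Dot1Q, TenantTag, type=0x88B5)       # inserted after the VLAN tag
    bind_layers(TenantTag, IP, type=0x0800)

    pkt = Ether() / Dot1Q(vlan=10) / TenantTag(tenant_id=42) / IP()
    pkt.show()

Parsing and inserting a header at this position, at line rate, is precisely the kind of job that needs FPGA-style programmability rather than a fixed-function NIC.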

SOA and SDN - Lots in common

A decade and a half ago, SOA drove an architectural change in software systems with the primary goal of loosening the tight bindings that existed between distributed components in a system. Back then the tight binding was the implementation language (primarily Java). SOA was not a product, but it drove the roadmap of almost every product in the software industry, and it created new products as well. Tight bindings made software systems unscalable and brittle. SOA succeeded: today's systems are more distributed and more scalable, and today's programmer has a choice of language. SDN too is not a product; it is an architecture. It too aims to loosen the tight bindings in network systems - in this case the tight binding between interface and implementation (which results in monopolistic markets). But the impact of SDN on the networking industry, at least so far (4 yrs since OpenFlow), seems very limited. SOA drove a new serialization, a new interface definition, a new directory and a new wir…

Who is the cloud customer?

Will the real cloud customer please stand up? Most of marketing theory strives to correctly identify the "real" customer. Once the customer is identified, catering to him or her is a downhill task. The problem is that the real customer is always hiding. You would too, if everyone wanted your money. Just as you don't answer telemarketing calls at home and want to get on the do-not-call list, the real customer uses proxies to engage vendors. In the cloud, the customer is the CIO, who is still spending 84+% of his budget on fixed costs. That leaves him a mere 15+% to spend at his own discretion, and most say only 5% of that is actually spent on innovation. His budget is reviewed every year by the CFO, who has to keep showing a growing bottom line in spite of a flat to downward-sloping top line. If you follow the S&P 500 companies, where revenue growth for the last 3-4 yrs is under 5% (they are taking ORCL to the cleaners for missing even that today) and the di…

Protocol vs. API: OpFlex

An API and a protocol both enable communication between two (or more, if a bus is involved) endpoints that are well defined and have address reachability. But an API wins over a protocol because of the flexibility it provides over a static wire format. This is especially true in infrastructure management. With an API, one can support multiple management models, including but not limited to programmatic (RPC, messaging) and declarative (all those ini files, or the conf t command on Cisco IOS).
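A toy sketch (this is not OpFlex, and all names are invented) of how a single API can front both models, an imperative RPC-style call and a declarative ini file applied as desired state:

    import configparser

    class PortApi:
        def __init__(self):
            self.state = {}

        def set_vlan(self, port, vlan):
            # programmatic model: imperative, RPC-style call
            self.state[port] = vlan

        def apply_config(self, ini_text):
            # declarative model: apply desired state from an ini file
            cfg = configparser.ConfigParser()
            cfg.read_string(ini_text)
            for port in cfg.sections():
                self.state[port] = cfg[port].getint("vlan")

    api = PortApi()
    api.set_vlan("eth0", 100)                 # programmatic
    api.apply_config("[eth1]\nvlan = 200\n")  # declarative
    print(api.state)                          # {'eth0': 100, 'eth1': 200}

A protocol would have frozen one of these models onto the wire; the API leaves the choice to the consumer.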


A key initiative in the cloud/virtualization industry is to open source every component of VMware's ecosystem. It started with the hypervisor, then jumped the stack and moved on to cloud management suites. What is not standard and open source yet is an inventory of managed objects. I am talking about managed objects that can be queried, the way they exist at a much more primitive level in SNMP. vCenter has such an inventory of managed objects, and the industry should endeavor to create a standard and open source…
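For reference, this is what querying one of those primitive SNMP managed objects looks like with pysnmp. The agent address and community string are placeholders:

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    error_indication, error_status, error_index, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData('public', mpModel=1),      # SNMPv2c community
               UdpTransportTarget(('192.0.2.1', 161)),  # placeholder agent
               ContextData(),
               ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

    if error_indication:
        print(error_indication)
    else:
        for name, value in var_binds:
            print(name, '=', value)

An open, standard inventory would let the same few lines of lookup work against any vendor's cloud stack instead of just an SNMP agent.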

L3 to Server

There is a trend towards moving switching (L2) out of the hypervisor and onto a NIC. After multiple false starts, it looks like it might actually happen.

Overlay networks essentially empower a virtualization management tool, or orchestrator, to perform the control functions of an L2 network. The tool knows the full lifecycle of a VM and does not really need to learn any MAC addresses. Coupled with an intelligent NIC and a standards-based overlay (MAC-in-IP), this tool can remove the need for an L2 switch in the hypervisor and bring the L3 network directly to the server. If this happens, it will shift the network edge inside the server and shift market power away from pure networkers to server vendors.
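A toy sketch of why no MAC learning is needed: the orchestrator already knows every VM's placement, so the forwarding decision collapses to a dictionary lookup followed by MAC-in-IP encapsulation. All addresses here are invented:

    # The orchestrator's authoritative view: VM MAC -> hosting server IP.
    VM_PLACEMENT = {
        "52:54:00:aa:00:01": "10.0.0.11",
        "52:54:00:aa:00:02": "10.0.0.12",
    }

    def forward(dst_mac, local_host):
        """Decide what to do with a frame, with no learning and no flooding."""
        host = VM_PLACEMENT.get(dst_mac)
        if host is None:
            return "drop"                        # unknown MAC: nothing to flood
        if host == local_host:
            return "deliver locally"
        return "encapsulate MAC-in-IP to " + host

    print(forward("52:54:00:aa:00:02", "10.0.0.11"))

Everything an L2 switch learns by flooding, the orchestrator knows by construction.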

NfV Packet Processing

One of the key pillars on which the vision of NfV rests is optimized (meaning fast) CPU-based packet processing, sometimes referred to as datapath processing in the CPU-memory complex without the need for any ASIC acceleration. Not requiring an ASIC in the data path makes the CPU fungible (changeable at will) and dramatically brings down the cost of network functions.
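A bare-bones illustration of CPU-only datapath processing, using a Linux raw socket polled in a loop (requires root; the interface name is a placeholder, and a real NfV datapath would use something like DPDK rather than the kernel socket API):

    import socket

    ETH_P_ALL = 0x0003
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    sock.bind(("eth0", 0))

    for i in range(10):                # handle a few frames, then stop
        frame = sock.recv(65535)       # every byte touched by the CPU alone
        print("frame", i, ":", len(frame), "bytes")

No ASIC is involved anywhere in this path, which is the whole point: the function can move to any CPU in the rack.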

As we look at optimizations, we may also want to look at why we have the Ethernet I and II packet types and how the MTU min/max/jumbo packet sizes were arrived at. Do we really need the Ethernet frame when the conversation is between two virtual machines on the same node? Even across host boundaries, but within the same data center, it is possible to have a conversation without creating Ethernet frames.
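The per-packet cost of that framing is easy to put a number on with scapy: the Ethernet header is a fixed 14 bytes that two co-located VMs arguably never need:

    from scapy.all import Ether, IP, UDP

    framed = Ether() / IP() / UDP()
    unframed = IP() / UDP()
    print(len(framed) - len(unframed))   # 14 bytes of Ethernet header per packet

Fourteen bytes per packet is small, but at millions of packets per second per core it is CPU time spent building and parsing a header that carries no information between two VMs the hypervisor already knows about.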

Hopefully we won't take existing VNFs, simply make OVFs out of them, and run them on a favorite hypervisor. Real NfV would involve refactoring the VNFs to use packet gateways for certain functions and use standard…