Monday, November 03, 2014

SDN and SmartNIC

SDN, or the programmable network, is quickly being split into three separate problems: the programmable management plane, the programmable control plane, and the programmable data plane. A lot of time is spent discussing programmable management and control, but very little time is spent on the programmable data plane.

Whatever overlay (over L2 or L3) is used for multi-tenancy, unless all the tenants are in the same host, the packet needs to leave the host. The host interfaces with the underlying fabric using a NIC of variable complexity. Today the fabric is not multi-tenant and the NIC is not really programmable.

The multi-tenant fabric is a focus of academic research. As always, researchers will come up with a new tag that needs to be inserted after the existing fabric tags. But its adoption will require programmable silicon on the NIC. This NIC needs to be truly programmable (in an FPGA sense of the word, not an SR-IOV VF sense of the word).
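A minimal sketch of what "truly programmable" means here: a match-action pipeline the operator can reprogram, of the kind an FPGA-based NIC could host. The field names and the tenant-tag action are illustrative assumptions, not any real NIC's API.

```python
# Sketch of a reprogrammable match-action pipeline for a NIC.
# Packets are modeled as dicts; rules and actions are installed at will.

class MatchActionTable:
    def __init__(self):
        self.rules = []  # list of (match_fields, action) pairs

    def add_rule(self, match, action):
        self.rules.append((match, action))

    def process(self, packet):
        for match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action(packet)
        return packet  # default: pass through unchanged

def push_tenant_tag(tenant_id):
    """Action: insert a tenant tag, analogous to a new fabric tag."""
    def action(packet):
        tagged = dict(packet)
        tagged["tenant_tag"] = tenant_id
        return tagged
    return action

table = MatchActionTable()
table.add_rule({"src_vm": "vm-a"}, push_tenant_tag(42))

pkt = {"src_vm": "vm-a", "dst": "10.0.0.2"}
print(table.process(pkt)["tenant_tag"])  # 42
```

The point of the sketch is that the tag format is not baked into silicon: a new fabric tag from the research community would be one more installable action, not a new ASIC spin.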

Sunday, August 10, 2014

SOA and SDN - Lots in common

A decade and a half ago, SOA drove an architectural change in software systems with the primary goal of loosening the tight bindings that existed between distributed components in a system. Back then the tight binding was the language used (primarily Java). SOA was not a product, but it drove the roadmap of almost every product in the software industry, and it created new products as well. Tight bindings made software systems unscalable and brittle. SOA succeeded in that today's systems are more distributed and more scalable, and today's programmer has a choice of language.

SDN too is not a product; it is an architecture. It too aims to loosen the tight bindings in network systems - in this case the tight binding between the interface and the implementation (which results in monopolistic markets). But the impact of SDN on the networking industry, at least so far (four years since OpenFlow), seems very limited. SOA drove a new serialization, a new interface definition, a new directory, and a new wire protocol. SDN so far has done nothing of the sort.

This could be because SDN is still looking for a killer app. It is certainly not vanilla network management, where it is applied today. SOA did not take off until a single webpage relied on 300 or more services to render, a pattern rapidly adopted in the social networking space. We are not seeing anything like this in networking yet. Cloud networking or NFV may be the killer apps that SDN needs, but so far we have not seen them drive any fundamental innovation into networking. SDN today is just a paper tiger. It needs a killer app and a real product to succeed. OpenFlow is increasingly looking like CORBA, not SOA.

Friday, June 20, 2014

Who is the cloud customer?

Will the real cloud customer please stand up? Most of marketing theory strives to correctly identify the "real" customer. Once the customer is identified, it is a downhill task to cater to him or her. The problem is that the real customer is always hiding. You would too if everyone wanted your money. Just as you don't answer telemarketing calls at home and want to get on the do-not-call list, the real customer uses proxies to engage vendors.

In the cloud, the customer is the CIO, who is still spending 84+% of his budget on fixed costs. That leaves him a mere 15% or so to spend at his or her own discretion, and most say only 5% of that is actually spent on innovation. His budget is under review every year by the CFO, who has to keep showing a growing bottom line in spite of a flat-to-downward-sloping top line. If you follow the S&P 500 companies, where revenue growth for the last three to four years is under 5% (they are taking ORCL to the cleaners for missing even that today) and dividends are expected to replace the lost income from bonds (thanks to the pump from the Fed), you will see very little room for the CFO to maneuver.

Cloud offers the CIO more bang for the buck, hence the interest in cloudification of all assets. That pressure to reduce fixed costs is driving software-defined everything. Software can replace most of the fixed costs ('admins'); it can disintermediate the admin. This trend is driving a whole new industry in configuration management and ever more user-friendly self-service portals. So if you are designing a system that connects a super user-friendly front end (read: simplistic) to a sophisticated back end (read: complex system), you have to use a language that is different from the one you may have used to specify a menu-driven desktop system.

Tuesday, June 03, 2014

Protocol vs. API: OpFlex

An API and a protocol both enable communication between two (or more, if a bus is involved) endpoints that are well defined and have address reachability. But an API wins over a protocol because of the flexibility it provides over a static protocol. This is especially true in infrastructure management. With an API interface, one can support multiple management models, including but not limited to programmatic (RPC, messaging) and declarative (all those ini files, or the config-t commands on Cisco IOS).
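To illustrate the flexibility argument, here is a sketch of one management surface driven by both models: a direct programmatic call and a declarative ini-style file applied against the same state. The `InterfaceManager` class and its fields are illustrative assumptions, not any vendor's API.

```python
# One management surface, two management models: programmatic and declarative.
import configparser

class InterfaceManager:
    def __init__(self):
        self.interfaces = {}

    # Programmatic model: an RPC-style call sets one attribute directly.
    def set_mtu(self, name, mtu):
        self.interfaces.setdefault(name, {})["mtu"] = int(mtu)

    # Declarative model: desired state arrives as an ini-style document.
    def apply_config(self, text):
        cfg = configparser.ConfigParser()
        cfg.read_string(text)
        for section in cfg.sections():
            for key, value in cfg[section].items():
                self.interfaces.setdefault(section, {})[key] = int(value)

mgr = InterfaceManager()
mgr.set_mtu("eth0", 1500)                  # programmatic
mgr.apply_config("[eth1]\nmtu = 9000\n")   # declarative
print(mgr.interfaces)
```

A fixed wire protocol would have to standardize one of these models up front; an API lets both coexist and lets new models be layered on later.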

A key initiative in the cloud/virtualization industry is to open source every component in VMware's ecosystem. It started with the hypervisor, then jumped the stack and moved to cloud management suites. What is not standard and open source yet is an inventory of managed objects. I am talking about managed objects that can be queried, the way they can at a much more primitive level in SNMP. vCenter has such an inventory of managed objects, and the industry should endeavor to create a standard, open-source version of it.

Recently I came across a draft from Cisco on OpFlex. I think the authors are attempting to create an open-source version of an inventory of managed objects, but doing so in a very inflexible way, i.e., creating a protocol for communication between endpoints. It would have been more useful had this initiative tried to create a framework with abstract objects that could be customized for the use case. The framework could offer a few basic services and define a few primitives. One such service could be an endpoint registry; another could be an endpoint profile. While it is positioned as a distributed system, it includes a domain, which brings a domain controller into the picture.

What the cloud/virtualization industry needs is not another protocol but an SNMP-like system that is open source but works at a much higher layer of abstraction than the device. We need to create an open-source inventory of managed objects that can manage virtual resources running on any hypervisor.
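A minimal sketch of such an inventory: hierarchically named objects, loosely analogous to an SNMP MIB but describing virtual resources rather than devices, with a query surface over types and attributes. The object types, identifiers, and attributes here are my assumptions for illustration.

```python
# Sketch of a queryable inventory of managed objects at the
# virtual-resource level (VMs, virtual switches), not the device level.

class ManagedObject:
    def __init__(self, oid, obj_type, **attrs):
        self.oid = oid            # hierarchical id, like an SNMP OID path
        self.obj_type = obj_type  # e.g. "vm", "vswitch"
        self.attrs = attrs

class Inventory:
    def __init__(self):
        self.objects = {}

    def register(self, obj):
        self.objects[obj.oid] = obj

    def query(self, obj_type=None, **filters):
        """Return objects matching a type and attribute filters."""
        return [
            o for o in self.objects.values()
            if (obj_type is None or o.obj_type == obj_type)
            and all(o.attrs.get(k) == v for k, v in filters.items())
        ]

inv = Inventory()
inv.register(ManagedObject("dc1.host1.vm1", "vm", hypervisor="kvm", vcpus=2))
inv.register(ManagedObject("dc1.host1.vm2", "vm", hypervisor="esxi", vcpus=4))

print([o.oid for o in inv.query("vm", hypervisor="kvm")])  # ['dc1.host1.vm1']
```

Because the hypervisor is just another attribute, the same inventory can span KVM, ESXi, or anything else, which is the portability the post argues for.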

Wednesday, March 26, 2014

L3 to Server

There is a trend towards moving switching (L2) out of the hypervisor and onto a NIC. After multiple false starts, it looks like it might actually happen.

Overlay networks essentially empower a virtualization management tool or orchestrator to perform the control functions of an L2 network. The tool knows the full lifecycle of a VM and does not really need to learn any MAC addresses. Coupled with an intelligent NIC and a standards-based overlay (MAC-in-IP), this tool can remove the need for an L2 switch in the hypervisor and bring the L3 network directly to the server. If this happens, it will shift the network edge inside the server and shift market power away from pure networkers to server vendors.
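The "no MAC learning" point can be sketched concretely: because the orchestrator already knows which host each VM's MAC lives on, encapsulation is a table lookup, not a flood-and-learn cycle. The header layout below is deliberately simplified and does not follow any real MAC-in-IP format such as VXLAN or NVGRE.

```python
# Simplified MAC-in-IP encapsulation driven by orchestrator knowledge.
import socket
import struct

# Orchestrator-maintained mapping: VM MAC -> physical host IP.
# (Populated from the VM lifecycle, not learned from traffic.)
mac_to_host = {
    b"\x02\x00\x00\x00\x00\x01": "192.168.1.10",
}

def encapsulate(inner_frame: bytes, dst_mac: bytes) -> bytes:
    """Wrap an L2 frame for delivery over the L3 fabric."""
    host_ip = mac_to_host[dst_mac]        # direct lookup, no MAC learning
    outer = socket.inet_aton(host_ip)     # 4-byte outer IPv4 destination
    vni = struct.pack("!I", 5001)         # tenant/segment id (assumed value)
    return outer + vni + inner_frame

# Inner frame: dst MAC (6) + src MAC (6) + EtherType (2) + payload (7).
frame = (b"\x02\x00\x00\x00\x00\x01"
         + b"\x02\x00\x00\x00\x00\x02"
         + b"\x08\x00"
         + b"payload")
packet = encapsulate(frame, b"\x02\x00\x00\x00\x00\x01")
print(len(packet))  # 29: 4 (outer IP) + 4 (vni) + 21 (inner frame)
```

With the mapping owned by the orchestrator, the hypervisor-side switch has nothing left to learn, which is exactly what makes the in-hypervisor L2 switch removable.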

Wednesday, January 15, 2014

NfV Packet Processing

One of the key pillars on which the vision of NfV rests is optimized (meaning fast) CPU-based packet processing, sometimes referred to as datapath processing in the CPU-memory complex without the need for any ASIC acceleration. Not requiring any ASIC in the data path makes the CPU fungible (changeable at will) and dramatically brings down the cost of network functions.

As we look at optimizations, we may also want to look at why we have the Ethernet I and II frame types and how the MTU minimum/maximum/jumbo packet sizes were arrived at. Do we really need an Ethernet frame when the conversation is between two virtual machines on the same node? Even across host boundaries, but within the same data center, it is possible to have a conversation without creating Ethernet frames.
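A sketch of the same-node case: two colocated endpoints exchanging payloads through a shared in-memory channel with no Ethernet framing at all. The class name and the queue-based transport are illustrative assumptions standing in for a real shared-memory or IPC mechanism.

```python
# Frame-free transport for endpoints on the same node.
from queue import Queue

class LocalChannel:
    """Hands payloads between colocated endpoints via shared memory."""
    def __init__(self):
        self.q = Queue()

    def send(self, payload: bytes):
        # No 14-byte Ethernet header, no FCS, no MTU segmentation:
        # the payload is handed over directly, whatever its size.
        self.q.put(payload)

    def recv(self) -> bytes:
        return self.q.get()

chan = LocalChannel()
chan.send(b"hello from vm-a")
print(chan.recv())  # b'hello from vm-a'
```

Nothing in the exchange depends on frame types or MTU limits, which is the point: those constraints exist for the wire, not for the conversation.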

Hopefully we won't take existing VNFs, simply make OVFs out of them, and run them on a favorite hypervisor. Real NfV would involve refactoring the VNFs to use packet gateways for certain functions and standard RPC/IPC mechanisms for others.
