A key element of the infrastructure that will form the cloud is resource reservation: a protocol that lets applications reserve resources. Without this element we can't have a credible SLA offering. And the reservation system has to be integrated with the billing system as well as the customer entitlement system.
Today's oversubscription systems allow contending processes to carry entitlements, but those entitlements have no basis in the economic value of the user who initiated the process. For example, in VI one can allocate shares to compute elements, but those shares do not take into account the customer's SLA entitlements. Nor did I see anything in the recently released vCloud API that suggests someone is thinking about it.
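The missing link described above can be sketched as shares that are derived from the customer's SLA entitlement rather than hand-tuned per compute element. This is a hypothetical illustration; the tier names and weights are invented, not from any real reservation API:

```java
// Hypothetical sketch: derive oversubscription shares from the customer's
// SLA tier, so contention is resolved by economic value, not arbitrary
// per-VM share counts. Tier names and weights are illustrative.
public class EntitledReservation {

    enum Sla {
        BRONZE(1), SILVER(4), GOLD(16);
        final int weight;
        Sla(int w) { weight = w; }
    }

    // Shares become a function of the SLA entitlement, so the billing and
    // entitlement systems share one source of truth with the scheduler.
    public static int shares(Sla sla, int baseShares) {
        return baseShares * sla.weight;
    }
}
```

The point is not the arithmetic but the coupling: the same entitlement record drives both the bill and the scheduler's share allocation.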
Since the days of Cluster/Grid, we have been making pretty powerpoint slides showing the business value of IT to customers. Cloud is supposed to provide the mechanisms for the customer to harness/govern the business value.
Blog to discuss the digitization and networking of everything that has a digital heartbeat
Friday, September 18, 2009
Thursday, August 06, 2009
Policy vs. Mechanism in Cloud
Existing architectures come embedded with their own policies, with little control left to the end user. This appealed to the enterprise customer, who only had to learn the knobs and how far to turn them before the product starts to smoke. This is about to change in cloud computing.
The forceful intermediation of an economic model into the use of an application (which is the main difference between a cloud and a cluster/grid) is disaggregating the policy definition point, or PDP, into multiple tiers. This is similar to what we saw happen to policy enforcement during the development of 3-tier datacenters in the late 90s. A policy defined at the CSP level will be inherited, extended and enforced at the enterprise IT level, and further changed and extended at the end-user level. This requirement of the cloud will bias system architecture towards mechanism and policy negotiation.
Policy vs. mechanism was a hot debate in the early 90s and looks likely to return once again.
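The tiered PDP described above can be sketched as a chain of policy maps where the nearest tier wins: the end user overrides the enterprise, which overrides the CSP. This is a hypothetical sketch; the keys and values are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of a multi-tier policy definition point: a policy set
// at the CSP root is inherited down through enterprise and end-user tiers,
// and any tier may override it for everything below.
public class PolicyTier {
    private final PolicyTier parent;                  // null at the CSP root
    private final Map<String, String> rules = new HashMap<>();

    public PolicyTier(PolicyTier parent) { this.parent = parent; }

    public void set(String key, String value) { rules.put(key, value); }

    // Resolution walks up the chain; the nearest definition wins.
    public Optional<String> resolve(String key) {
        if (rules.containsKey(key)) return Optional.of(rules.get(key));
        return parent == null ? Optional.empty() : parent.resolve(key);
    }
}
```

The bias towards mechanism shows up here: the class only provides inheritance and override; what the keys mean is left entirely to each tier's negotiation.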
Monday, July 13, 2009
Cloud CPE
Hardly anyone mentions the need for a CPE, which IMHO is a requirement for a cloud computing model. Today MSFT announced that the next version of Office will have a free online version. But one of the big features of a cloud is "offline browsing/applications". That is the only way to protect oneself from highly publicized outages like Amazon's a few months ago and Rackable's a few weeks ago.
A CPE that enforces local security policies, including authentication and filtering. Offline browsing and metering are business-model requirements in cloud computing.
The other function that is hardly discussed in CC circles is syndication. I have been trying to add an animation/3D module to Google's online presentation tool (the PowerPoint equivalent), but not even Google has an open plug-in architecture to enable this. Unless folks think that the cloud is just another proprietary application running on the network, this functionality is the key extensibility requirement for a useful CC app.
Saturday, June 20, 2009
MMOG is the true Cloud App
MMOG, or Massively Multiplayer Online Game, is a widely used cloud application that never makes it into any discussion of cloud computing. MMOG is expected to be a $9B market by the end of this decade, with its ground zero in China. WoW (World of Warcraft), which debuted in China, reached a peak concurrency of 500K users. These cloud applications have tens of millions of registered users with millions of daily visits. This industry has spawned an ecosystem around the applications, with operators called MMOs. As with all technologies, they have now introduced open platforms for games.
These gaming platforms have already experienced the issues that business-oriented cloud computing platforms, and later operators, will face.
Appleap.com shows which games/apps are the most popular on Chinese SNS.
Saturday, May 30, 2009
Servers for Clouds
Three major segments of cloud computing: Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), Software-as-a-Service (SaaS).
PaaS
force.com
googleapps
LongJump, Bungee Labs
Wednesday, March 11, 2009
DNS is part of the cloud
With all the automation promised for an ISV in the cloud, there is a need for a service that most of us take for granted. I blogged about it almost a year ago, but only recently figured out that I was only scratching the surface of the problem from a cloud perspective.
If seven ISVs use the same cloud, whose DNS service are they going to use? Inside their cloud operation, if an IP address is generated for a machine, how does a Java process open a socket on it? You cannot hardcode IP addresses. How can an application developer write an application that could be deployed behind any FQDN at any cloud? A cloud has multiple zones; how will the app developer know the zone?
These issues cannot all be solved by DNSaaS; some device inside the cloud needs to enable this.
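One way to frame the application side of the problem: the process should compose a zone-qualified FQDN from names injected at deploy time and let the cloud's resolver own the mapping, so the same code runs behind any FQDN in any cloud. A minimal sketch, assuming the cloud publishes records of a hypothetical form like "db.zone-a.tenant7.example-cloud.net":

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical sketch: the app never hardcodes an IP. Service, zone and
// tenant domain are all injected at deployment; the cloud's (or an internal
// appliance's) DNS owns the name-to-address mapping.
public class CloudNaming {

    // Compose a per-tenant, per-zone service name from injected parts.
    public static String serviceFqdn(String service, String zone, String tenantDomain) {
        return service + "." + zone + "." + tenantDomain;
    }

    public static InetAddress resolve(String service, String zone, String tenantDomain)
            throws UnknownHostException {
        // The JVM opens its socket against a name, not a generated address.
        return InetAddress.getByName(serviceFqdn(service, zone, tenantDomain));
    }
}
```

This only answers the application's half of the question; the cloud still needs the internal device that keeps those records current as machines come and go.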
Saturday, January 10, 2009
Cloud Application Programming Model
Most definitions of cloud computing highlight just one of its dimensions, i.e. the scale of application delivery. Cloud computing, they say, is about internet-scale (millions of simultaneous connections) application access hosted at a WAN-latency distance. While remote hosting is an important characteristic of the cloud, it is not really, IMHO, the most important one. Back in the late 1990s, we experimented with hosting applications remotely. That failed. It failed because the programming model did not evolve to accommodate distribution of functionality across WAN-latent connections.
In today's Web 2.0 world, we have new page elements which can be dropped into a page to invoke remotely resident applications. This document-management-inspired model needs to evolve into a programmatic model for real cloud computing to happen. The document management paradigm is not an evolution of the object-oriented paradigm that is dominant today. The efforts that went into discovering the most efficient way to migrate object-oriented programming to the web got lost in endless debates: SOAP vs. REST, sync vs. async, language vs. description, etc.
What a programmer aiming to write for the cloud really wants is a way to import a library (a Java package, for me) that is resident in an SDK installed somewhere on the web. This way I can import any Java package that exists anywhere in the world, access any database hosted anywhere in the world, and have a class object sitting in my local directory that I can load into any JVM on any device.
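Today's JVM already hints at this model through URL-based class loading: bytecode can be fetched from a location on the web and loaded into the local JVM. A rough sketch of the idea; the SDK URL below is illustrative, not a real host:

```java
import java.net.URL;
import java.net.URLClassLoader;

// A rough sketch of the import-from-the-web model using URLClassLoader:
// classes are fetched from a (hypothetical) SDK location rather than a
// locally installed jar.
public class RemoteSdkLoader {

    public static Class<?> loadRemote(String sdkUrl, String className) throws Exception {
        // The loader pulls bytecode over the network on first use of the class;
        // classes already known to the JVM are delegated to the parent loader.
        URLClassLoader loader = new URLClassLoader(new URL[] { new URL(sdkUrl) });
        return loader.loadClass(className);
    }

    // e.g. loadRemote("https://sdk.example.net/j2ce/", "com.example.cloud.Table")
    // would make a package hosted anywhere importable without a local install.
}
```

What is missing for the vision above is everything around this mechanism: discovery of well-known SDK locations, versioning, and trust.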
Maybe it is time for Sun to create a J2CE (Cloud Edition). J2CE should not require me to download anything other than a NetBeans IDE that has built-in, well-known SDK locations resident around the world.
Friday, January 02, 2009
Bungee Jump off Puenta Iglesia over Rio Colorado
My first bungee jump off a bridge in South America, over the river Colorado. The bridge is 80m above the water and the bungee cord extends to within 20m of the ground.
I checked with AIG, if there is a mishap during a bungee jump, they don't pay your
Sunday, October 05, 2008
Application Streaming
Here is a problem... You want to increase the bandwidth per pin on a chip, but the power budget dictates that the core cannot be run faster. So you increase the number of cores and buffers to meet the bandwidth per pin. Why is this important to application streaming?
Systems that deal with real-time information do not have the luxury of disk storage. Most RT information is computed inside the chip complex, and semiconductor technology places the current bottleneck at the pins. To remain real-time, we need to get the information off the chip and onto the network, on its way to its consumer, as fast as possible. If I am a remote consumer of information, the core that is provisioned for me is not the bottleneck; the pins it drives to get data to my handheld are.
So I have now resigned myself to the fate that my desktop will be hosted in the cloud and my service provider will charge me for the resolution at which I interact with it. The higher the resolution, the higher the charge. I am sure they are not thinking that people will share a remote session on a server like a 1970s mainframe. What will make people give up their local desktop is a desktop that works like TV: the consumer buys the screen and the service provider delivers the information. Service providers can also differentiate among one another through the range of supported peripherals, as in today's MMOGs (games).
All of this puts the emphasis on application streaming. Moving code to the client for execution is not going to fly: to make it usable, I need a thick client, and that kills the cloud economics. A remote session is too slow. Streaming looks like the only approach right now. And for it to be useful, the chips need to increase the bandwidth per pin.
Tuesday, June 10, 2008
Location Based Services
At Apple's WWDC, Jobs said there is a proliferation of LBSes on the mobile. Having a GPS inside the phone provides location as a parameter for all applications. I do not think there is a class of services called LBS. Classes of applications on the mobile generally fall into four categories:
1. Sync and Search which are data intensive
2. Mapping which is graphics intensive
3. Social/Sharing which is network intensive
4. Monitoring which is sensor intensive.
Of course there is the voice and other standard media player type phone functions that we already have.
All these categories will benefit from location. But the point is that there is no one application called LBS. Also, more services will be delivered to the phone over its WiFi network than over its cellular network. Phones add on functionality at a far more rapid pace; the carriers simply won't be able to keep up.
Thursday, May 15, 2008
Mobile Applications Framework
If you play with the various mobile platforms out there such as Android, LiMo, S60 etc., the one thing you will notice is that the application development frameworks on these platforms are quite immature. One of the big advantages of having a framework is not having to write custom code to reason about a new resource type. JDBC essentially created the whole application server market. We need something like that for mobile applications as well.
Mobile frameworks are essentially client-side libraries that hide the myriad APIs available to access resources over the web.
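The JDBC analogy can be sketched as a driver interface per resource type, registered by URI scheme, so application code never binds to a vendor API. Everything here is hypothetical and illustrative, not any real mobile platform's API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a JDBC-style mobile resource framework: one driver
// interface per resource type, looked up by URI scheme, the way
// DriverManager.getConnection() dispatches on a JDBC URL.
public class MobileResourceFramework {

    public interface ResourceDriver {
        String fetch(String uri);   // analogue of a JDBC Connection/Statement
    }

    private static final Map<String, ResourceDriver> drivers = new HashMap<>();

    // Vendors register a driver for their scheme ("contacts", "map", ...).
    public static void register(String scheme, ResourceDriver d) {
        drivers.put(scheme, d);
    }

    // Application code uses only the URI; the framework picks the driver.
    public static String open(String uri) {
        String scheme = uri.substring(0, uri.indexOf(':'));
        ResourceDriver d = drivers.get(scheme);
        if (d == null) throw new IllegalArgumentException("no driver for " + scheme);
        return d.fetch(uri);
    }
}
```

The value, as with JDBC, is that a new resource type means a new driver, not new custom code in every application.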
Wednesday, April 30, 2008
Home Area Network
Did you know that the average American household contains appliances that use at least six different network protocols every day? They are:
1. Cordless Phone Network
2. Bluetooth Network
3. Cellular Network
4. WiFi Network
5. Cable Network
6. POTS Network
This does not even include the many remote controls which use point to point proprietary protocols. Remotes like
1. TV/Audio or Anything IR
2. Garage Door Opener
3. Keyless Entry on Car
And increasingly, the devices on these networks are not stateless. When I add someone to my cell phone's address book, it does not automatically update the address book on the cordless phone at home, or Outlook on my laptops and desktop. When I talk over Vonage, something should automagically throttle an MP3 file being downloaded.
There has got to be a switch that can switch control/data across these networks.
Sunday, March 16, 2008
Buttons on the Web
The industry keeps searching for the next killer application when it is sitting right in front of us. All we need to do is add functionality to it. The killer application is the Web, and it has buttons.
The first one (circa 1995) is called "View". We click on this button and we browse the web, view videos and listen to audio. It is now called Web 1.0.
Then we started to notice a new button begin to appear on the web. With time, we could read its label: "Edit". Using the functionality this button provides, we can now edit Wikipedia, blogs and podcasts ourselves. This button's backend infrastructure is being developed at companies such as Amazon, Google, Cisco (WebEx), SF.com etc.
Once again there is a new button appearing, but people can't quite read its title just yet. Those who can read it are making frequent rounds of their neighbourhood venture community. Unfortunately, all but a select few will see it only when it is too late. That button is titled "Run". Yes, the same thing as when you press the "Windows" key + "R". The coolness of this new button is that you can type pretty much anything (assuming an application exists) in the run box, and it (the application/service) will appear from somewhere on the web. You can use the application as long or as little as you want and save your results to a local disk or a webdisk. All of this as part of a subscription from your friendly ISP.
Hope the industry can rally the resources to fund and develop this new infrastructure to support this "run" button. This is where the next Google is going to come from.
Sunday, January 13, 2008
Tata Nano Design Should be Open Sourced
While the whole world is focused on the price of the Tata Nano ($2500), I think the biggest innovation coming out of Tata Motors (TTM) is the packaging and delivery enabled by a modular design. The Nano comes as a package to your door and you can assemble it yourself.
What Tata should be doing is open sourcing the design of the Nano along the lines of the GPL. This would spark a revolution in the auto industry with Tata as its spiritual leader. Tata could build a cult of enthusiasts around the world who would mod the base model and add to the open source IP of the Nano.
Tata may want to take a few cues from IBM's experience with the design of the PC. The PC was based on an open architecture, yet it was not IBM that made money on the PC but Microsoft and Intel. What Tata should do is open source the design and make money on the engine and the gadget-ware that goes with the car.
The last time someone open sourced the design of a manufacturable product, it turned into a ubiquitous entity, albeit with notoriety. Yes, I am talking about the AK-47 rifle. Tata has the opportunity here to make the world's first open source car. C'mon Ratan Tata, just do it for India, yaar!
Saturday, January 12, 2008
Theory of Virtualization
IMHO, virtualization is about which resource you hold constant and which you scale. The most popular version of virtualization is where one fixes the hardware configuration and scales the hosting environment. This type of virtualization, done by VMware/Xen etc., is what most people think of when one says virtualization. We can generalize this to define virtualization types based on which one or two resources one holds constant and which ones one scales.
I think virtualization as a field is a function of hardware, hosting environment, collective, application and, of course, the user (identity).
Virtualization Types

Hold Constant | Scaling Dimension | Type
Hardware | Hosting Environment | VMware Type
Hosting Environment | Hardware | Grid Type
Hardware & Hosting Environment | Identity | Single Sign-On
Hardware, Hosting Environment, Identity | Applications | Web Application Delivery Type
Monday, December 04, 2006
Demographics & Computing
It has not slipped my attention that the most valuable companies in the technology industry today tend to be culturally oriented (GOOG, Apple, RIMM etc.), sometimes also called consumer-oriented technology. Contrast this with the most valuable companies of the 90s, which were infrastructure oriented. One could justify this trend with reasoning derived from the study of economic business cycles, which says there is first overinvestment in a trend, followed by a bust and painful restructuring, and then another (and longer) boom. Analogies are drawn to the railroad construction bubble, followed by bust, followed by boom. By this reasoning, the internet industry should have restructured and the next killer application should have been business oriented. Instead, what we are seeing is that the killer application for the internet is "Cultural Networking".
I think the driver is shifting demographics in the world. Rutgers University did some research on demographics in the US, specifically on people between 18 and 25 years of age. The research calls this group the "Millennials". The SF Chronicle earlier this year published a story on this group and their economic behavior. An interesting tidbit from the article is that this generation (approx. 70M strong) is nearly as large as the boomer generation (77M). A lot of research has shown that the rise of consumption-driven economics in the US was due to the boomers' purchasing habits. Today, I think, it is these Millennials who are driving the consumer economy.
This explains the increasing adoption of electronic networks (on the internet) to communicate, form opinions on products and purchase them.
Saturday, November 11, 2006
Web 2.0 Infrastructure
In one week, I heard the CEOs of the two major networking companies mention Web 2.0 and SOA, trends normally associated with computing. What makes these digital plumbing companies interested in Web 2.0 and SOA?
As I have been blogging for the last 4 years, it is all about converting a compute problem into a networking problem. These two trends shift the focus of innovation away from shared APIs and data adaptation, which were the focus of the integration business, towards standards-based networking of application components.
The networking companies, who have so far been content with connecting stateful compute devices, now have to start figuring out how to connect a stateless compute device with its state, which could be resident anywhere on the network. In other words, they have to move beyond "Can you hear me now?" to "I can be there now (digitally at least)!"
This is going to require a major shift in thinking (and later business model) for networking companies. The CEOs of the networking companies realize this and know that to stay relevant they need to understand the intelligence that resides at the end nodes and how they can serve the communication needs of this increasingly distributed intelligence.
On the other side, the computing players who gave birth to SOA and Web 2.0 have to learn how to use the network, specifically how to overlay the intelligence that they now control onto a network. Doing SOA on a compute platform is DOA. Doing SOA on the network is the future.
Monday, August 14, 2006
Industry Is Climbing a Layer of Abstraction
Wherever you look in the industry, there is a new trend driving research and development of the next step in the ladder of abstraction in computer science. You only have to scratch the surface of household buzzwords like SOA, Web 2.0, Grid Computing, Virtualization, On-Demand, Utility Computing and (of course) Application Overlay Networks or AONs to realize that these trends are nothing more than a layer of abstraction over everything that is being done today.
Start with SOA: it is a layer of abstraction on top of the currently dominant programming paradigms. At this new layer, one has to be independent of the language, the underlying hosting environment, the underlying data model, and the inter- and intra-application communication protocol. You are seeing XML-driven data models take hold on top of a hosting environment that supports platform-independent interfaces, and the interfaces themselves are described in a platform- and language-independent fashion.
Web 2.0 abstracts the web tooling to a level where the underlying platform becomes the whole internet and not just a datacenter. The run-time of this new trend is a catalog of web services exposed on the internet, and the client-side hosting environment is the browser. Most of the problems in this space actually have to do with the fact that network programming is still stuck at the socket level of abstraction.
Grid Computing abstracts a unit of computation to a new level where it is totally independent of the underlying CPU and IO architecture. Its cousin, utility computing, tries to do the same for the economics of computing by trying to find a pricing model that actually correlates a customer's SLA with his/her use of the underlying infrastructure.
Virtualization and On-Demand, which can claim to be the progenitors of all the subsequent abstraction trends (and are actually the only ones making money), aim to abstract into software all the necessary and sufficient characteristics of the underlying hardware. On-Demand actually focuses more on management.
Finally, coming to AONs: this is the new abstraction layer at which tomorrow's network needs to operate. The article on "The New Network Switch" does a good job of summarizing AONs, which is what Sun's ex-CEO referred to as the "Big Freakin' Web Tone Switch".
Start with SOA, it is a layer of abstraction on top of currently dominant programming paradigms. At this new layer, one has to independent of Language, the underlying hosting environment, the underlying data model and inter and intra application communication protocol. You are seeing XML driven data models take hold on top of a hosting environment that is supports platform independent interfaces and the interfaces themselves are described in a platform and language independent fashion.
Look at Web 2.0 abstracts the web tooling to a level where the underlying platform becomes the whole internet and not just a datacenter. The run-time of this new trend is a catalog of web services that are exposed on the internet and the client side hosting environment is the browser. Most of the problems in this space actually has to do with the fact that the network programming is still stuck at the socket level of abstraction.
Grid Computing abstracts a unit of computation to a new level where it is totally independent of the underlying CPU and I/O architecture. Its cousin, utility computing, tries to do the same for the economics of computing by trying to find a pricing model that actually correlates a customer's SLA with his or her use of the underlying infrastructure.
Virtualization and On-Demand, which can claim to be the progenitors of all the subsequent abstraction trends (and are actually the only ones making money), aim to abstract into software all the necessary and sufficient characteristics of the underlying hardware. On-Demand actually focuses more on management.
Finally, coming to AONs: this is the new abstraction layer at which tomorrow's network needs to operate. The article on "The New Network Switch" does a good job of summarizing AONs, which is what Sun's xCEO referred to as the "Big Freakin Web Tone Switch".
Thursday, February 09, 2006
Ubiquitous Purchase Order
If you ever get into a conversation about the use cases of SOA, then you have probably heard of the Purchase Order (PO). The use case goes something like this: there is a PO which needs to be routed across multiple enterprise components (recently exposed as web services), and this PO needs to find its way to its destination through a maze of business processes. It seems everyone has a policy which says, "if the PO is greater than some dollar amount then send it to the big boss, otherwise the little guy can approve it as well".
So abstracting the use case out a bit: the PO is the input into a black box, some processing occurs, the PO changes the state of the black box, and zero or more messages saying so are returned to the originator.
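As a minimal sketch of that black box (the dollar threshold and the message wording below are made up purely for illustration), the approval policy might look like:

```python
from dataclasses import dataclass, field

# Hypothetical approval threshold, for illustration only.
APPROVAL_THRESHOLD = 10_000  # dollars

@dataclass
class PurchaseOrder:
    po_id: str
    amount: float

@dataclass
class ApprovalService:
    """The 'black box': a PO goes in, the internal state changes,
    and zero or more messages come back to the originator."""
    pending: list = field(default_factory=list)

    def submit(self, po: PurchaseOrder) -> list[str]:
        self.pending.append(po.po_id)  # the state change
        if po.amount > APPROVAL_THRESHOLD:
            return [f"{po.po_id}: routed to the big boss"]
        return [f"{po.po_id}: approved by the little guy"]
```

The whole business rule reduces to one conditional; everything else is ceremony around moving the document, which is exactly what makes the ubiquity of this use case worth questioning.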
I have yet to hear someone question this use case in the context of the vision of an SOA-enabled datacenter. I always wonder why the PO has to exist at all. After all, in the world of SOA, multiple systems are now connected at the application level and supposedly understand the business processes extant in the organization.
POs were created because the buyer and seller had no other way to account for a transaction. The PO was the document that implemented the approval process, the actual transaction, and the budgeting process. If the underlying IT infrastructure is moving towards loose coupling, the business processes layered on top of it will need to be reengineered to take advantage of this new infrastructure.
I think the biggest opportunity in SOA might end up in the laps of management consultants. Perhaps, I should have stayed at Booz.Allen ;)
SOA Patterns and AON Appliances
There is a lot of discussion around SOA patterns nowadays. If you have used design patterns in a past life, these SOA patterns are not really the same thing. They are not reusable chunks of code that you can cut and paste. These are big chunks of functionality that need to be deployed at various points in your network. From an AON perspective, almost every pattern in SOA can become an independent appliance on the network.
When the first appliances hit the market in the 1997 timeframe, the pattern they followed was Linux plus some server software. An AON appliance is a dual-plane appliance: XML forms the data plane, and the control plane, depending upon the pattern the appliance is implementing, can be Java, a scripting language, or anything else. The performance, of course, comes from the data plane.
So if we survey the SOA landscape today, we can easily find references to the Gateway pattern, the Governance pattern, the Broker pattern, the Router pattern, and so on. Each one of these can be made into an appliance in a service-oriented network. The only difference between the appliances would be the control plane. You could deploy all these patterns into a collective, which some folks call the ESB, or you could drop in appliances at various points in the network to achieve the same result.
Once you have the patterns deployed, the challenge shifts to managing the multiple deployed patterns: planning for capacity, scaling them, securing them, and so on. This is where a network-based approach with appliances shows its clear advantage. Capacity planning, securing, scaling, and sharing are networking's forte.
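As a toy sketch of one such pattern, the Router, split across the two planes (the routing table and endpoint names here are hypothetical), the data plane inspects the XML message and the control plane applies the routing policy:

```python
import xml.etree.ElementTree as ET

# Hypothetical routing table: XML root element -> destination service.
ROUTES = {
    "po": "procurement-service",
    "invoice": "billing-service",
}

def route(xml_message: str) -> str:
    """Data plane: parse the XML message.
    Control plane: apply the routing policy (here plain Python,
    standing in for Java or a scripting language)."""
    doc_type = ET.fromstring(xml_message).tag
    return ROUTES.get(doc_type, "dead-letter-queue")
```

Swapping out only the policy function while keeping the XML handling fixed is what would let the same chassis play Gateway, Broker, or Router: the appliances differ only in the control plane.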