Software Defined Networking (SDN) - is it really the future of networking?
Date: Mon, 05/07/2012 - 18:48
ONF? SDN? Yeah, yeah, yeah – has Gartner heard all this before?
Ian Keene’s USA-based colleagues at Gartner, analysts Joe Skorupa and Mark Fabbi, were the first to comment on the foundation and aims of the ONF in their independent report, Open Networking Foundation Formed; The Battle to Commoditize Network Hardware Begins – so they should know what they are talking about.
The good news is that the report gives the ONF’s aspirations the thumbs up, recognizing that: “If OpenFlow gains traction, the core data centre network market could see significant disruption, with dramatically falling equipment prices”.
But note the word “battle” in the report’s title: since when have revolutionaries ever stopped fighting amongst themselves? The ONF was launched by massive data centre operators, with much to gain from open systems and hardware cost reductions, but what attracted the big NEMs to a revolution that promises to butcher their prize cash cow? Could this be a re-run of the ATM Forum scenario, where vendors delayed rather than accelerated the passing of profit-threatening standards?
At NetEvents EMEA Press Summit in Garmisch, Germany, our Gartner guy Ian Keene gives credit where credit is due, but presents our panel with these and other challenging issues around market politics and the need for realism in a movement that had its roots in the ivory towers of academia. They may also suggest alternative solutions.
Industry Saviour versus Devil’s Advocate. Let the debate begin...
Panellists: Craig Easley, Vice President of Marketing and Product Management, Accedian; Marc Lasserre, Director of IP Standards, Alcatel-Lucent; Peter Feil, Director, Research and Innovation, Transport and Aggregation Infrastructure, Telekom Innovation Laboratories, Deutsche Telekom; Volker Distelrath, Head of Research, Transport, Aggregation and Fixed Access, Nokia Siemens Networks; Dan Pitt, Executive Director, Open Networking Foundation; Markus Nispel, Chief Technology Strategist, Enterasys
Introduced and Chaired by: Ian Keene, Vice President, Gartner
Good morning. So – virtualisation, data centre consolidation. It's all about computing, and centralisation is the fashion in IT these days, while poor old networking was getting left out. But no, we have OpenFlow now, so we're centralising there too.
Software-defined networking really started with the data centre, because there are a lot of pressures on a data centre: more and more traffic, more servers, but also a requirement from many applications for less latency, for higher speed, and for different types of traffic – whether it's Ethernet, Fibre Channel or InfiniBand. And essentially it all equals more dollars needed to build more and more – and bigger and bigger – data centres.
So there is a bit of a problem in the plumbing – the networking. It can be a bit of a headache, and an expensive headache too. So where are we going?
So just to make it plain – I know Dan's covered this – traditionally we have decentralised switches, where the data plane and the control plane are on the same device.
One could argue that's a good thing: it can be very robust, it can withstand a nuclear attack and so on. But now we're talking about centralising it. And is OpenFlow the answer? Is that really going to make networking within data centres, for example, much easier, lower cost, faster, and quicker to manage and adapt?
Here, as Dan said, we've got a centralised and managed controller, and below that you've got OpenFlow data plane switches. And from what I understand – although we'll ask the panel about this – we're talking about pretty dumb, low-cost switches here. So it's a different proposition.
It sounds good. It can sound – well, it does sound very promising if you're operating a data centre or running a network. But does it sound so good if you're an equipment vendor supplying the networking equipment?
Let's just look at the data centre switch market, for example. It's more than a $6b equipment market every year, and it's a market where vendors have been consolidating. We haven't quite got published 2011 market shares yet, but in 2010 the top three vendors controlled 82% of port shipments. So it is essentially a lucrative market controlled by a few large vendors.
And what are they going to do? Are they going to sit back and say: OpenFlow, software-defined networking, that's the way to go – goodbye $6b market? No. So it'll be interesting to see how this pans out. Can we really get to a situation where the customer writes the software that controls the switches, the switches are pretty dumb products, and everyone's happy? Well, let's wait and see what happens there.
And it's not just the data centre – Dan talked about going beyond that. One thing of interest, I think, is carrier networks in total, not just the data centre part. That's a $13b equipment market for service provider routers and switches, and the top four vendors have 85% of that market value. One of them is going to be on the panel.
So how do they react to software-defined networking? Is it the way forward? Or do service providers really need those proprietary features to keep the cutting edge?
So what's the future? Does OpenFlow have enough momentum or will proprietary boxes continue to rule? OpenFlow certainly is getting momentum, and it's a flavour of software-defined networking that will go forward.
So a threat or opportunity? And that's what I will talk about when the panel come up.
If the panel would like to come up now, please.
Three of the panel members belong to the Open Networking Foundation. And interestingly, three do not. So it'll be interesting to see what they say on this, and we'll have some discussion.
I don't know if some of the audience may be a little confused. I certainly started off a little confused. Where does OpenFlow fit into software-defined networking? Is it the essence of software-defined networking? Are there other flavours of it?
Dan, maybe you'd like to start and go forward. How do you see where OpenFlow fits into software-defined networking as a whole? And what is software-defined networking?
So our view on OpenFlow and software-defined networking is that there is a lot more to software-defined networking than OpenFlow, but that OpenFlow is an essential part of software-defined networking because it's now the industry standard for conveying provisioning information and flow table information from the separated control plane to the forwarding plane.
There are alternatives for doing that, but they're point solutions and some of them do not enable the provision of open software above the control plane.
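To make Dan's description concrete, here is a minimal sketch of the split he describes: a controller computes match-action rules and pushes them into the flow tables of otherwise dumb switches. The class names, field names and port labels below are hypothetical Python illustration, not the real OpenFlow wire format, which is a binary protocol with far richer match fields.

```python
# Illustrative sketch of the control-plane/forwarding-plane split:
# the controller installs match->action entries; the switch only does
# table lookups. All names here are hypothetical, not real OpenFlow.

class FlowEntry:
    def __init__(self, match, action, priority=0):
        self.match = match        # e.g. {"dst_ip": "10.0.0.5"}
        self.action = action      # e.g. "output:port2"
        self.priority = priority

class Switch:
    """Forwarding plane only: no routing logic, just table lookup."""
    def __init__(self):
        self.flow_table = []

    def install(self, entry):
        # What an OpenFlow flow-mod message achieves on a real switch.
        self.flow_table.append(entry)
        self.flow_table.sort(key=lambda e: -e.priority)

    def forward(self, packet):
        for entry in self.flow_table:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.action
        return "send_to_controller"   # table miss: punt to controller

# The (logically centralised) controller provisions the switch:
sw = Switch()
sw.install(FlowEntry({"dst_ip": "10.0.0.5"}, "output:port2", priority=10))

print(sw.forward({"dst_ip": "10.0.0.5"}))   # matches installed rule
print(sw.forward({"dst_ip": "10.0.0.9"}))   # table miss
```

The point of the sketch is the asymmetry: all intelligence lives in the controller, and the switch's only behaviours are "match and act" or "ask the controller".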
Okay. Volker, do you have an opinion on this? Do you all agree? Or where do you see OpenFlow fitting in?
I would categorise software-defined networking into three areas. One is flexibility in how capacity is used on demand. The second is [indiscernible] automation. And the third is flexible use of the hardware: you utilise the platform and can decide in different ways what should be [processed] on it.
And I think some of this is already available today in many schemes – like [indiscernible] Liquid Net – where we pool capacity and use it in different places. OpenFlow gives us additional opportunities: to simplify networks, and to open up more ways of doing networking, in a much more open way.
And there is big potential for it to be disruptive to the way we do things today.
But for an equipment vendor, like Nokia Siemens Networks, is that a good thing? Or do you see your value-add and your intellectual property going down?
I think this is a good thing, because it opens up new possibilities to do networking and provide services to our customers in a different way. We see it more as a gain than as a threat.
That's great. Peter, how do you see OpenFlow fitting into software-defined networking? Is it the way forward?
Yes, I think so. OpenFlow is a very important piece of the software-defined networking architecture, as Dan explained.
It's important because, for us as a network operator, it breaks up those monolithic boxes we have in our networks today. Those big routers are really black boxes, and with OpenFlow we can take the opportunity to break them up – to decide ourselves which pieces we buy from which vendor. So in the future we may, hopefully, be able to buy the switching modules from one vendor – hopefully very cheap ones; that's one of the goals – and the control part could be delivered by some other vendor.
The applications on top of that could be written by ourselves – that would give us flexibility, and also the possibility to differentiate from our competitors – or we could buy them from the market. We hope that many companies will pop up to provide solutions on top of this software-defined networking architecture, solutions that are not bound to a specific vendor. That's our hope.
I sort of agree. And as an equipment vendor, I'd like to position ourselves as one of those providers of the applications on top.
As we move into software-defined networking, there are basic functionalities in the network that are still going to be required: protection, resiliency, performance monitoring and management capabilities. So SDN is just a new platform on which existing functionality – functionality that today lives in some of these other protocols – can be sold as an application that rides on top.
So, as a provider of a subset of networking functionality – specifically performance assurance – those are applications that will be needed. And so we're investing for the future, evolving our delivery platform to take advantage of software-defined networks and the OpenFlow protocol.
Marc, what about you?
Yes, as an equipment vendor as well, I'd like to give some perspective on everything we've heard so far.
Networking devices, whether switches or routers, have been treated as black boxes until now, and that was pretty convenient, because you could run applications regardless of what the underlying topology was. But there is also value in abstracting the networking layers so that applications can make good use of the network. So what SDN brings is these abstraction layers, which help applications deal with specific network attributes that right now are completely decoupled from them.
So I think the value of SDN is really this: if you take networking as a black box, applications run without knowing what's going on underneath. We can definitely abstract what's happening in the network and have applications specify what they expect from the network.
So this is where the value of SDN – the overall, as yet [unrealised] SDN – lies. OpenFlow is one minor component of it, I would say: being able to program flow entries in a switch or a router.
Yes, another equipment vendor, so I don't want to reiterate what everyone said before, so slightly different view on that.
So, for sure, SDN – [indiscernible] just one word on that: we are focused on medium to large enterprise and data centre infrastructures, and we are the only vendor right now who still uses custom flow-based ASICs. We have been in the flow-based business for at least 20 years now, so we have quite a bit of experience in that space.
But getting back to SDN itself and our perspective or my perspective, SDN, for sure, the main target here is to automate the network operation and really provide that abstraction layer.
And as Dan pointed out, customers are doing that today. Sometimes proprietaries, so there are mechanisms today to make it work. But every mechanism that is used today is not fully standardised, and OpenFlow would be a key component, from our point of view, to have a standardised abstraction layer component here that is included. So, we see that being very beneficial.
On the other hand, in terms of reducing the cost of your data centre infrastructure, we take a more differentiated view. It really depends on where you're trying to deploy such a solution. As was mentioned already, there needs to be a certain level of intelligence inside the infrastructure itself – you cannot extract everything into a unified network OS layer. That also limits the ability to reduce costs in the data plane, the forwarding plane. And you have to keep that in mind.
And depending on where you deploy it, the reduction is greater or smaller. In the data centre, for example, you might be able to achieve a lot in terms of cost reduction. In a typical enterprise scenario, you might not, because there the notion of a flow is typically a very granular one, so you need the capacity to manage flows locally on the device as well.
So it's not black and white. It's probably a grey if you are negative, or coloured if you are a positive thinking guy, in terms of what you are going to achieve with OpenFlow and an SDN architecture. And we should clearly distinguish all of these use cases.
Okay. One thing I wasn't clear about, Dan, in your speech: we see new standards for switching being developed, but what about standards compliance?
What has to happen there? How does a vendor say: look, we comply with this OpenFlow standard? And who says yes or no, if that is the case?
Yes, I agree that is important. What the Open Networking Foundation is doing is developing programmes for interoperability testing, conformance testing and, eventually, performance benchmarking. That means we will stipulate the test cases, the testing procedures and the qualification of testing facilities.
We don't intend to do any testing ourselves, nor do we intend to designate a single particular testing agency. We want to foster an open market in conformance testing, provided the testing facilities are properly sanctioned and the tests are written by the Open Networking Foundation.
Okay, so compliance testing is one thing that has to be done before software-defined networking products, such as OpenFlow, really hit the mass market. But what else?
Volker, for example, what else needs to be done with software-defined networking, do you think, before it becomes the mainstream switching product that a vendor like yours produces?
I see two things here. The first is that the OpenFlow specification needs to be extended to cover circuit switching – at the moment it's mainly packet-based. This is also something [indiscernible] the working groups are planning to start on: a circuit-switching specification, so that all the layers we have in the network can be addressed in a broader way with the same specification, the same protocol standard.
And the second part, of course, is that you cannot just implement something new and ignore the existing legacy installed base. You need something where you can also deploy this protocol, this API, on the existing infrastructure, so that you can use it with the existing equipment as well. Otherwise you have something new and it will never go in. So this is [indiscernible] a criterion we see as necessary – and we see it as possible with OpenFlow.
Okay, okay. And from a network operator point of view, Peter, what do you think needs to be done with software-defined networking before large scale deployment?
Well, software-defined networking you can do already today with proprietary solutions. If you go to one vendor, they offer you the whole spectrum that is needed to do SDN.
Regarding OpenFlow, we have been following this development from the very beginning. So I was in charge of the first project, together with Stanford and Deutsche Telekom and others, which specified the first OpenFlow spec.
So we also did some testing from the very beginning, and we talked a lot with our business units about their requirements. Their number one, number two and number three requirement was IPv6. This is available now, and it was really the main barrier stopping, for example, our data centre business units from deploying OpenFlow inside our company.
But there are other things that are still missing. We are just at the beginning of the development, and what we have available today is not really carrier-grade.
So resilience is not available. You can probably do a lot on the software layers, so it doesn't have to be on the hardware layer. This is still missing.
What Volker mentioned – interoperability with legacy systems – we, for example, are evaluating and also implementing some prototypes in that area, together with partners in some European-funded projects. MPLS interoperability, for example, is available.
This is very important because we cannot replace existing networks. We are not able to go for a greenfield approach. So we have our networks and we need an interface between a new OpenFlow cloud and the existing legacy cloud, and this is happening.
So I think these are the most important issues at the moment.
And do you think software-defined control, run on a four year old Dell computer or whatever, is really going to be robust and resilient enough for …
No. Definitely not so. It may be if you have more than one of these Dell computers.
But how many do you need? Do you need the same number as you have data plane switches?
So essentially, are we basically just saying, look we end up with the same switches, but OpenFlow or software-defined networking is another application to put on that switch?
No. For one autonomous system or so, you probably need a handful of control elements. But what is also not available today is standardised communication between the different controllers.
What you can do today is split up the flow spectrum across different controllers.
For example, one controller is responsible for all multicast traffic and another for all unicast traffic; or one controller is responsible for all traffic going into one autonomous system and another for all the rest, or something like that. That is possible.
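Peter's partitioning idea – different controllers owning different slices of the flow space – can be sketched as a simple dispatch rule. The controller names below are made up for illustration; the multicast test itself is real (IPv4 multicast is the 224.0.0.0/4 class-D range, which Python's standard `ipaddress` module knows about).

```python
# Sketch of splitting the flow spectrum across controllers: one owns
# multicast traffic, another owns unicast. Controller names are
# hypothetical; the multicast check uses the real IPv4 class-D range.
import ipaddress

def responsible_controller(dst_ip):
    addr = ipaddress.ip_address(dst_ip)
    if addr.is_multicast:          # 224.0.0.0 - 239.255.255.255
        return "controller-multicast"
    return "controller-unicast"

print(responsible_controller("239.1.1.1"))  # multicast destination
print(responsible_controller("10.0.0.7"))   # ordinary unicast
```

The same shape of rule works for the other split Peter mentions: keying the dispatch on the destination autonomous system instead of the address class.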
And just to correct your slide a little: it showed the control being put in one place, under central control. That is not needed – you can split up the control elements, as I just explained. But this is, of course, something where we have to do some more research, some more development.
Okay, so it's certainly, I think, easy to think of it needing to be centralised. But that's good that you brought that out.
Craig, what do you think needs to be done with software-defined networking before it's yet another application on an existing proprietary box?
I think there is no shortage of things that need to be done. We're in the infancy here.
There really hasn't been a lot of discussion about how to take OpenFlow or software-defined networks out of the data centre and into a WAN environment, per se.
There also needs to be some thought on how to capitalise on the specialised hardware that exists, which is actually responsible for sending those packets around at light speed.
Certainly, when you move to a software architecture you get more flexibility, more programmability, but you lose quite a bit of performance. So there'll be a lot of research required on building hybrid networks, where I can move the control plane around yet still take advantage of all that special-purpose hardware that's been built over the past couple of decades and does a good job of sending packets from point A to point B.
So those are just two that come to mind. How does this play in the WAN space? How are services defined over these networks? Are we going to reinvent all of the service architectures? Or will we be able to re-use some of the MEF service definitions, for example?
A lot to be done then.
A lot to be done.
Yes, Marc. What’s your view?
I have to fully agree here with the last speaker. There is a lot to be done. So let's be clear here. You were talking about interoperability testing, compliance testing.
Testing a protocol – and it's a fairly simple protocol – that programs flow entries is one thing. And that's what OpenFlow is all about.
Testing different nodes is another. Because when you say centralised – it's true that a fully centralised controller approach would not scale, would not work. You'd obviously have to support a number of controllers. And today there is nothing specifying how these controllers talk to each other. Is it a secure channel? How do I deal with redundancy – active-active links or active-backup? There are plenty of open questions.
And in the end, those controller modules are using the same routing protocols that were running in the monolithic boxes before. So the same effort that went into checking whether one IS-IS implementation was compliant with somebody else's will have to be repeated, whether that stack runs in an external controller or in a router.
So there is plenty of work to be done. I think we're just scratching the surface, and it will take years and years before we're able to move a full routing plane into an external control plane.
Are you looking at it from the aspect of large carrier networks? Or do you think we could see some data centre solutions before that?
Certainly, I think the main application of OpenFlow/SDN, the main use cases, are in the data centre space. We don't see a big need for it from our customers beyond that.
Supporting an OpenFlow type of API, yes. Supporting an API that provides network abstractions so applications can run more efficiently, yes. But decoupling the control plane from the data plane is not the key piece here. It's mostly those other things that we need.
Sorry, but I do not agree with that. OpenFlow and software-defined networking are already used today. The first OpenFlow use cases I've seen are really in the data centre world. If you look at who is part of the ONF, you can guess which companies are already using OpenFlow heavily and extensively in their big data centres.
And we are currently evaluating a solution in the access aggregation area of our networks, because we have very complex network architectures. It's really a nightmare to provide a normal DSL customer or a triple-play customer with the services we are offering to millions of customers.
Many, many IT systems are involved in that, and we have proprietary solutions for interconnecting with our OSS/BSS systems. With OpenFlow, we think we can put all this into a kind of cloud, remove the complexity from the existing platforms, and also open up a migration path which allows us, in the future, to adapt rapidly to new needs from our customers. So we are not bound to the existing hardware – which is really fixed plant – and it gives us much more flexibility. We think breaking this up is really important.
Right, so better customer service and lower operating costs, potentially.
Well, this is basically what I said. It's useful to have an open API and to be able to do things on a box instead of using proprietary techniques. That is obviously the main reason for something like OpenFlow.
But to answer your specific question about interoperability testing – about when people can actually move – I want to stress that there is plenty of work to be done before this happens.
Right, and Volker you wanted to …
Yes, I would like to add something.
I also disagree with our colleague here. Once you have an OpenFlow-enabled network, you have a centralised control plane, and then, of course, you can much more easily put in new protocols or new ways of doing things. Even if you run them on separate controllers, those controllers can run different protocols. What you cannot change through the control plane is the hardware. If you want to change something in the network today, you have different vendors – a multi-vendor environment, which is the typical environment – and if you want to change some way of doing things, you have no chance, because there are too many different ways it could be done. You would have to upgrade every single system and run compatibility tests again. But if you are centralised, you have a big speed advantage compared to those who don't have this centralisation.
Is centralising so much also actually increasing the risk of something going wrong in the network?
It is normal that you do some [dual homing] or make some resilience effort, so that if something is not going well you can cope. Also, if you try something new, you can isolate a part of your network and try it there first, before you go to a big scale. There are a lot of ways of doing this – and these possibilities you don't have if you don't go in this direction.
Let me reply to this. Sorry, to take the mic again, but let's be honest here. Are we moving back to an IBM mainframe style, where everything is centralised in one big box? It's not going to fly. There is a reason why computing got distributed among a set of smaller computing devices, i.e. PCs, right? Today, we use the fact that there is a lot of distribution of processing power.
And when we say a centralised controller, it's a logically centralised controller because it's not going to work otherwise, right? That means it's not so much centralised because, again, we will have … You asked a good question when you said how many controllers will we need? Is it one per device? You know, probably not.
And I think we can certainly aggregate a lot.
But talking about just one piece controlling everything, it's not going to work. We've had experiences in the past with other technologies where it failed. So we know it's not going to work. So we have to decentralise. It's a fact.
We have offline traffic engineering tools today that have a global view of the network, that are capable of doing things from a global standpoint, because when you distribute, you may not have a complete view of the whole network. You have a certain view of the network.
So there is value. But let's be very careful here.
Yes, I think that goes back to my initial comment that we have to really make sure that we distinguish the different use cases because these different use cases might have radically different requirements on the upper layers.
And specifically on the control plane: I agree that only a distributed control plane will scale to the appropriate levels for most applications. For some applications – if you run a single data centre entity – you might be fine with a single-controller architecture. But for different networks and different scales, a distributed architecture is key, and that's something we really have to invest in.
And getting back to what is required in terms of scale – what Dan pointed out, that you can use a four-year-old Dell PC to manage a 30,000-user campus in terms of flows: that is true if your flow definition is a very coarse one.
But if you're looking at security applications where you want to be granular, where you do want to go up to the application layers, suddenly the number of flows per end system is multiplied. You typically see a 10x increase the higher you go up the OSI layers. So if you stay at Layer 3, you're probably fine with 10 flows per end system – that's just 300,000 flows on a large campus, so you're good.
But once you're talking applications, suddenly you're talking 3m or more flows.
And then it's obvious that you cannot do that with a single centralised controller.
And then the distribution starts, and you have to make sure that a proper mechanism is in place, and that coordination and orchestration take place between the different controller entities as well. So, again, it's a balance, and it depends radically on the use case and the deployment model.
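Markus's back-of-envelope numbers can be checked directly. Taking his figures – roughly 10 flows per end system at Layer 3 granularity, and about a 10x multiplier when you classify at the application layer:

```python
# Back-of-envelope flow-table load, following Markus's figures:
# ~10 flows per end system at coarse Layer 3 granularity, and roughly
# a 10x multiplier for application-layer classification.
end_systems = 30_000          # the 30,000-user campus from the talk

l3_flows = end_systems * 10   # coarse, Layer 3 flows
app_flows = l3_flows * 10     # application-layer granularity

print(l3_flows)               # 300000 flows: one modest controller copes
print(app_flows)              # 3000000 flows: forces distribution
```

The jump from 300,000 to 3 million flows is what drives his conclusion that a single centralised controller stops being viable once granularity climbs the stack.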
And getting back to migration strategies: I also think we will see hybrid approaches for quite some time, in terms of what control plane functionality you can really centralise using OpenFlow. For the foreseeable future, at least when it comes to migration, vendors do have to provide hybrid approaches as well, to make the whole thing work.
Greenfield is different. I think we've seen, especially in new data centre build-outs, greenfield deployments that don't need that kind of hybrid approach. But for the bulk of the customers and the deployments, we will see that hybrid approach.
So again, same box, but some extra features on it.
Yes, exactly – slicing the control plane into a traditional control plane and into the portion that is centralised using, for example, OpenFlow to support an SDN architecture.
Okay. We're getting tight on time here. So I'd like to open it up now to the audience if they have any questions. There's one at the back there. We'll get you a mic.
Bob Emmerson, European Editor, TMC Group
I have a question, but first just a very, very quick comment. My quick take on this is it seems to be ideal and designed for the enterprise. And then you talk about taking out into the carrier world, which is a fundamentally different world. And it'll take forever to get it up and running in the carrier world. But then, everything takes forever in the carrier world.
But a more serious question: when you do it in software, it's going to run slower – that's inevitable. One of the panel members said that the network running between the data centres and so on can be specifically designed for different applications. I wonder whether that counterbalances the inherent slowdown when you do things in software.
The data path is still the same, right? In terms of packet forwarding capabilities, you're still shooting packets at wire speed. And the control plane might not run slower, depending on which CPU the controller is actually hosted on.
So this is not really the key problem. The main driver for OpenFlow/SDN, I think, is really the last question you asked: can we open up APIs such that applications are aware of what the network is doing, and vice versa? Can we have an API such that the network can actually do things to appease applications?
Today those two worlds do not talk to each other.
And those APIs will be very useful, because plain vanilla forwarding as you know it – Layer 2 switching, IP routing – works well. OpenFlow/SDN did not happen because we're not switching fast enough. Plain forwarding works really well; it's been proven.
The Internet works. We run everything over IP today. So that was not the key point.
The key point is really: okay, give me an open interface so that I can either play with it and run my own protocols, for instance, or virtualise some of the control plane processing. But really, I think the key advantage is that the application talks to the network and vice versa.
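The application-to-network conversation described here could look, very roughly, like a northbound API where an application states what it expects and a logically centralised controller answers from its global view of the network. Everything in this sketch – the class, the method, the capacity model – is hypothetical illustration, not any real controller's API.

```python
# Hypothetical sketch of an application talking to the network: the app
# declares its requirement, and the controller grants or refuses based
# on its global view of spare capacity. Illustration only.

class NetworkController:
    def __init__(self, free_bandwidth_mbps):
        self.free = free_bandwidth_mbps   # global view of spare capacity

    def request_path(self, app, bandwidth_mbps):
        """Northbound call: an application asks for guaranteed capacity."""
        if bandwidth_mbps <= self.free:
            self.free -= bandwidth_mbps
            return {"app": app, "granted": True, "mbps": bandwidth_mbps}
        return {"app": app, "granted": False, "mbps": 0}

ctrl = NetworkController(free_bandwidth_mbps=1000)
print(ctrl.request_path("backup-job", 800))   # fits: granted
print(ctrl.request_path("video-feed", 500))   # only 200 left: refused
```

The interesting part is not the bookkeeping but the direction of the conversation: with the black-box model the application could never ask, and the network could never answer.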
And one comment on control plane performance: if you move to an external x86 model, you might actually get more control plane performance, given how switches are designed today. So just keep that in mind.
Any other questions from the audience?
Let me make an additional comment on that. Our vision, if you will, is that in the future we will have the software-defined networking architecture, and on top of that we let the community build the applications.
We have a similar approach in other domains, so why not go for competition in this area too? Open Source is probably an important key to that. And we already did some evaluation, for example, of open-source, freely available BGP routing modules. There is Quagga – maybe you've heard of it. Quagga does BGP routing, and it performs really well.
We did some interoperability testing with existing boxes, Cisco, Juniper and so on.
So these solutions are already there. So we don't need any additional implementation to do that. So the network research community will help a lot in providing additional solutions, things we cannot envision today. So as Dan said at the very beginning, the network research community is really keen to go for this approach. And many, many research institutes are already looking into that. And we hope that we can benefit from it.
You come across, Peter, as much more optimistic than a number of the other panellists that software-defined networking will become a reality sooner.
Probably, yes. As I said, in the data centre environments, we have it already today.
And my guess is that we will see it in some enterprise areas. There are a number of startups who are working in that area.
Probably the core carrier networks are, at the moment, not really the first choice for OpenFlow. We have highly specialised, high-end machines there, with specialised hardware, and we have looked at the performance issues. Currently we cannot replace them.
But that's not an issue. We don't have many of these boxes in our networks, so even if one of them costs more than €10m, that's not much of an issue. We have many more boxes in the aggregation and access network, and replacing them, or giving them more flexibility with software-defined networking, would help a lot, both on the cost side and on the flexibility side.
Do you have a date in mind when you say this? Could you see it being a reality in two years, in five years or further down the line?
Well, again, in the data centre environments, we have it today. So we are, at the moment, evaluating, testing some OpenFlow-based solutions for the access aggregation network. I think we need two or three more years until this can really be used on a production network. That's maybe optimistic, two to three years. But we will not be able to replace everything at once. The migration path is very long in carrier networks.
But two to three years when you see it entering main service in certain areas of the network?
Yes. In the enterprise market even earlier I think, yes.
I think you will see carrier announcements this year of worldwide services available this year. And I think, in general, these time horizons are going to shrink because with open software, you have so many more potential players. It might take some organisations a long time, but it's the faster organisations that will survive and will capture market share.
There will be many more ways for carriers to acquire the software needed for the things everybody has mentioned, for coordination up in the OSS and in the control layers.
You're talking about service providers gaining market share through using software-defined networking?
Give me an example of that. Would they just be able to provide better service on applications?
Introduction of new services faster.
Oh, introduction of new services faster. Okay.
And more customised services for different customer populations.
Okay. Interesting. Right, so any advances on one year? We've had two to three years, optimistic.
Well, I think it's running in data centres today. Just a couple of months ago, Indiana University announced a campus-wide OpenFlow implementation. So people are getting their hands around it, and I think that will just continue. At the next meeting, I think there's a plan to take up what it means to put OpenFlow into the WAN, so it'll continue.
Absolutely, it'll show up. It's not going to be an overnight switch-over; nobody expects everything to just go all OpenFlow tomorrow. But adding OpenFlow-based applications to existing hardware will ease some of the pain points that service providers are feeling, like providing better application performance and more flexible provisioning and service creation. Those are in demand today, so you'll start to see those applications on the horizon first.
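The OpenFlow-based applications mentioned here program forwarding as match/action flow entries pushed down to existing switches. A simplified sketch of that flow-table model follows; the field names and action strings are illustrative, not the actual OpenFlow wire format:

```python
# Simplified model of an OpenFlow-style flow table: a controller application
# installs match/action entries, and the switch forwards packets by applying
# the first matching entry. Field and action names are illustrative only.

class FlowTable:
    def __init__(self):
        self.entries = []  # list of (match_dict, action) in priority order

    def install(self, match, action):
        """Controller pushes a flow entry down to the switch."""
        self.entries.append((match, action))

    def forward(self, packet):
        """Apply the first matching entry; default behaviour here is to drop."""
        for match, action in self.entries:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"

table = FlowTable()
# e.g. steer RTSP video traffic out a high-bandwidth port, the rest normally
table.install({"ip_dst": "10.0.0.2", "tcp_dst": 554}, "output:port2")
table.install({"ip_dst": "10.0.0.2"}, "output:port1")

print(table.forward({"ip_dst": "10.0.0.2", "tcp_dst": 554}))  # output:port2
print(table.forward({"ip_dst": "10.0.0.2", "tcp_dst": 80}))   # output:port1
print(table.forward({"ip_dst": "10.0.0.9"}))                  # drop
```

This is the mechanism that makes "more flexible provisioning" concrete: creating a new service becomes installing new flow entries from software, rather than reconfiguring boxes one by one.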
Okay. We're running out of time now. I think we have one more question?
Charles Ferland, BNT Vice President, IBM System Networking Europe
Software-defined networks: is the value proposition going to be lower-cost deployment? Are we going to be saving money? Do you think we're going to save money deploying a software-defined network versus a traditional network?
Or are the benefits not going to be in the capital cost, but really in cost savings in operations, better management, faster deployment and so on?
I personally think the biggest values are in the automation of network operations and the introduction of new services more effectively and faster. So that's my take on it.
On the hardware side, as was said before, I don't see that you can save very much in terms of hardware cost. If you look at typical switches today, 90%-95% of the cost at the access layer is not in the host complex; it's in the forwarding ASIC, the power supplies, the main board and all that. So you don't really save a lot by stripping down the control plane on these commodity switches.
So it's more about operations and, as Dan mentioned, being faster to market, introducing new services faster and differentiating yourself by doing so.
I would concur with that. I think most of the savings will be on the OpEx side rather than the CapEx side. There have been claims about savings on the hardware side but, as was said before, there are not so many ways of actually building a hardware forwarding engine.
So I don't think we're going to save that much, maybe a little because of the economies of scale. But most of the savings will be on the operational side.
Okay. One last question.
From the floor
Quick question. If not OpenFlow, are there alternatives to consider, particularly from vendors?
Dan wouldn't like an answer in that space. But I personally think, from a vendor perspective, OpenFlow is a viable option. There are other mechanisms today for defining a software-defined network architecture, using XML/SOAP and other protocols. But from a standardisation perspective right now, I think OpenFlow has the edge.
From my side, I think it doesn't matter to us which protocol it is. What matters is that it's standardised, and we see in OpenFlow a very level environment to provide [indiscernible], and a very big potential that this will happen very fast, because all the main players are in and all working on this one. And I think the way it is set up also looks very promising.
I think it does hold a lot of promise because you've got the end users in there, working with the implementers and addressing some of the holes that exist today. XML/SOAP, CLI, NMS and Netconf all do part of the job, but none of them does the whole job.
So I think that, just as part of the MEF's success came from involving the vendor community with the service provider community, bringing the whole community together certainly works in the ONF's favour.
Okay. Well, I'd like to thank my panel. It's been quite interesting and I've learned something today. So thank you very much, and would you please give them a hand?