Metro Cloud - A growing opportunity for high speed data centre interconnect
Date: Fri, 10/24/2014 - 11:42
MEF SDN & OpenFlow World Congress roundtable, Dusseldorf, Germany
Chaired by: Rick Talbot, Principal Analyst Optical Infrastructure, Current Analysis
James Walker, Advisory Director, MEF Board, Founder & President CloudEthernet Forum, Vice President, Managed Network Services, Tata Communications
Chris Purdy, Chief Technology Officer, CENX
Chris Liou, Vice President of Network Strategy, Infinera
James Armstrong, Executive Vice President, Spirent Communications
Andrew McFazden, Head of Global Marketing, Network Services, Orange Business Services; Chairman, MEF
Well, thank you for coming to participate in this first session, Metro Cloud - a growing opportunity for high speed data centre interconnect. So, the context is particularly the Third Network, where you have the capability of taking the Ethernet that you define with Carrier Ethernet 2.0 and giving it, from an operator’s standpoint, what might be considered a kind of SDN capability to make it agile.
So, I’ll tell you right now my perspective - and actually I typed out a note to give perspective to the panellists; it looks like everyone has it. But it’s just the concept that this is a significant opportunity, and it’s really an opportunity for the operators who provide Ethernet services, because today a lot of the datacentre interconnection is set up with just full-time private lines. In fact, they often lease their own fibre and put their own equipment on it and run that as capacity, which is fine for them, but there are other features and benefits that they could get that service providers normally provide. One of the ways that service providers can do this is to provide services that are specifically geared to them.
Now, when the services were just strictly set up as you would with Carrier Ethernet 2.0, it was a manual process and it took a long time, so there wasn’t anything distinctly different in function between that and what they were already doing. Whereas, on the other hand, if you’re able to put up capacity on demand, then that’s something that their existing equipment and their expertise just can’t do. So, there is this opportunity. That opportunity flows to the service provider, but also to the equipment manufacturers, the systems manufacturers, because they would be able to put more value into the equipment.
So, that’s my perspective on it. But we have panellists here who have their own distinct perspectives, and what I would like to do is just have one sentence from each of you on why, from your perspective, using agile Carrier Ethernet for datacentre interconnection is significant.
So, Chris Purdy from CENX, why don’t you go ahead?
Sure. I can go back to some previous examples from the multiple different datacentre providers that we have worked with. You’ll get the really mature datacentre providers, and they are totally comfortable laying pancake waves between all of their datacentres. They then want to do the separate bandwidth for each of the different customers to then get interconnected, in which case they’re bypassing the service providers.
Other datacentre operators that we work with are much more concerned because they don’t want to compete with the service providers that they are connecting with. But then they end up with a really significant competitive disadvantage, because if any of their customers want to get connectivity into their infrastructure, they end up having to wait 90 days or whatever it takes to establish a connection.
So, I think that in the event that the service providers actually do achieve the appropriate level of agility and can set up these connections in near real time instead of over long lead times, then it does make the decision a lot easier for the datacentre operator to actually use these facilities, rather than having to compete with the service providers, who are really important customers and partners to those datacentre operators.
We have Chris Liou from Infinera. Chris, what’s your perspective on this?
Sure. I can provide a view from the equipment provider perspective. We talk to various different types of customers - we have content providers, we have datacentre operators as well as service providers - and we definitely see a growing interest and opportunity within metro cloud. We have a class of customers who have just tremendous amounts of bandwidth going between their datacentres. From our [survey data], those datacentres form the backbone of their service and operations. For them, we’ll sell wavelengths or big chunks of bandwidth - something that they’re very interested in.
But we also talk to the type of datacentre operators who provide continuous services as a part of their offerings, and for them smaller sized bandwidth interconnect pipes between datacentres, that may be of shorter duration, are certainly of interest. So, we actually see a mix of different types of requirements for the datacentre interconnect market.
But one thing that is common between them is that the bandwidth pipe will start to increase in size between these datacentres.
We have Andrew McFazden from Orange.
I’m not on your slide, so I was going to pretend to be Paul Gunning from BT.
I’m Andrew McFazden from Orange. I’m in charge of Global Marketing for Network Services. So, actually, when I was first asked if I wanted to join this session I said no, because I’m a network guy rather than a datacentre guy. But I’m also the Chairman of the MEF and, in all seriousness, today - I don’t know whether we coined it, but the phrase exists - “No network, no cloud”. I think the marriage and the necessity of having a strong, good, agile network interconnecting datacentres is absolutely critical. I think the work that the MEF is doing with the Third Network, which is around creating much more automation and agility, and therefore speed, in interconnecting service provider networks, is really helpful in this.
I mean, our customers are increasingly demanding. The customers of Orange Business Services are typically enterprise customers. They’ve been quite slow and quite hesitant, in my opinion - maybe James will have a different opinion - in moving into the cloud. It’s only really in the last 18 to 24 months that we’ve seen that starting to change, because of offers like hybrid cloud. I guess there will be other questions, so we can talk about that a bit later. But we’re seeing now the start, I believe, of a real shift to adopt cloud in the enterprise market, which means that the capacity, and the agility of providing capacity and connectivity into and amongst datacentres, becomes an even greater necessity.
We have James Walker with Tata Communications.
So, I’m also another Advisory Director on the MEF Board, and I’m the Founder and the President of the CloudEthernet Forum, and in all my spare time I do Managed Network Services at Tata.
From our point of view, the metro fabric that provides the datacentre connectivity in the city at the end of the link is not our primary focus. Our primary focus is the connectivity from city to city across the world, providing a datacentre backbone. So, what we see customers creating is clusters of datacentres in a particular region that need to be run together at high speed, and then linking those regions together. So, we start from the linking-the-regions-together piece, and then the metro fabric piece becomes sort of an add-on at the end. It’s the piece that completes the ring. It’s the piece that provides interoperability between two halves of the network, those sorts of things.
So, we essentially try to approach this as being the last piece of the puzzle. The customer needs to start off with: where am I going to geographically put my datacentres? Where do I need to store my data? What are my failover scenarios, and all that sort of thing, and approach that from a quality perspective. Quite often, particularly for large datacentre consumers, these are banks, systems integrators, these kinds of companies that have large amounts of data to store and privacy rules they need to adhere to. So, start off with the global architecture, and then the metro piece becomes a relatively simpler piece.
On the question of capacity between those two datacentres within a metro area, it’s all good if what you have is just two datacentres in that metro area. In that case, you can talk about dedicated fibre.
If you start having to have a more complicated datacentre arrangement in that metro area, then sometimes these solutions aren’t appropriate and then often the carrier can step in and add extra value by providing different types of connectivity between different datacentres and different scale, being able to turn up and turn down capacity as required, or even to integrate other services that they will also need to consume by the internet and so forth, making those available out of that fabric.
So, I think in an arrangement where there are purely two datacentres and they are replicating each other, it’s a struggle for the service provider to insert themselves into that space. When you consider it as part of an overall global system, or an overall metro system, then I think there is much more scope to be able to do that.
We also have James Armstrong with Spirent.
So, I’ve been with Spirent for six months now, and I’m the Executive Vice President of what they call Network [Complications] and Infrastructure for Spirent. We test the networks that interconnect the datacentres and the service provider networks. So, we provide a testing function.
So, in terms of this particular topic, I think the service providers will help the smaller datacentres get the kind of scale that they may need in a more flexible and faster way. I think the large, more mature ones - the web scale providers, as Chris mentioned before - have the capability of supporting their own interconnect. But the smaller ones, in order to get the scale and so forth, could benefit from having service providers manage the network between the datacentres. It allows them to focus on the compute, storage and running the applications in the datacentre.
Once again, I’m Rick Talbot. I’m Principal Analyst of Optical Infrastructure - specifically service provider infrastructure - at Current Analysis. So, my focus is, first of all, the equipment that the service provider might be putting into their networks, and specifically the optical network equipment. So, things like dense wavelength division multiplexing - how far can you go, how many channels can you put on - and then also packet optical, because that’s big in that area, so the switching on those platforms. Associated with that is software defined networking, because that’s controlling how that switching is occurring. So, what seems like a fairly narrow field has gotten fairly broad, and it’s always interesting.
So, we’ve started off with the perspectives. You have my perspectives typed up there, and you’ve heard from everyone else. What we’re going to do now is drill down a little bit into the different concepts that we’ve touched on. In order, these are some of the specific questions we will be asking, and then, of course, the discussion will go from there. These are the questions that just start the discussion.
So, the concept is starting off with looking at datacentres and asking: what’s the opportunity? Why is this interesting to us? That starts with how they do their business right now, and then we’ll ask what benefits agile Carrier Ethernet brings to them in light of that. That is, what is in it for the datacentre operators? How does it help them, and how does it help the service providers? Then into some more esoteric topics. One thing that affects what you’re going to be transporting, and how they’re going to want to control it, is the difference between the protocols used within the datacentre and those interconnecting the datacentres, which may turn out to be a little different. But if you’re able to connect across two datacentres with the same thing that you have inside, it can make things easier. We’ll talk about that soon.
Then just some other questions, once again drilling into how would this work? It’s one thing to say it would be great to have an agile carrier Ethernet between datacentres, but then if it’s going to be agile, then let’s think about how it will be agile.
We’re not looking at the datacentre. We’re not looking at datacentre equipment; we’re not looking at the equipment that the systems vendors give you. We’re really looking at the characteristics of how you want to do it as service providers.
Okay, so let’s start with the first question, if anyone wants to address this. Just specifically on datacentre interconnection, what kind of opportunity are we looking at in the marketplace? We haven’t talked about it very much before, and here we are, with several vendors and some service providers talking about it. So, what’s the opportunity there?
I think it depends on what space you’re in as a service provider. If your focus is very much on providing connectivity out to the endpoints, the consumer services provided from that datacentre, then you’re going to have one particular focus, and maybe the inter-datacentre opportunity is not particularly interesting to you. If you’re more interested in providing the infrastructure that links all those datacentres together, then this becomes absolutely critical.
So, I think different service providers will have different approaches to this. Some are better placed to serve these requirements than others. You may have some operators that have very dense capability in particular metro areas, or particular national or regional markets, but might not be able to do the whole global piece. So, everyone is going to have a different focus.
I think that traditionally the market split into two types of service providers: one that was very enterprise focused, with the complex services associated with providing enterprise services - firewalls, traffic optimisation services, all of that kind of thing - and then sort of dark fibre providers who are just providing a transport layer and very little value-added service.
Perhaps in this area we’re starting to see a little bit of overlap between these two, but they’ve tended to be selected by the datacentre operator, or the company providing the datacentre services, based on their level of internal competence at dealing with things. So, if they just wanted a pile of Lego and they wanted to build their own thing, then they tended to go to a dark fibre provider, and the service provider never really got a look in.
Other people have some of the components, but they don’t necessarily know what they want to do with them, or they want more work to be done by the service provider that then enhances the value that they offer to their customers. So, they’ve wanted more of a service provider.
So, it’s sort of how it has tended to separate out in my experience. So, I don’t know.
Well, maybe I read this a similar way. I think from a service provider perspective, actually, we probably don’t talk about datacentres very much now. We talk about cloud when we’re talking to our customers, because in the past you would have a discussion with a customer about hosting services. So, you know, a customer would want to outsource and have their kit hosted in a datacentre, and we would have a discussion around proximity, security of the physical datacentre etc., and we would provide the connectivity from that customer’s VPN into that datacentre.
I think certainly the discussion on the enterprise side has changed a lot now. It’s much more around: do you want to have your services hosted in the cloud? And so, to deliver that promise to the end user, clearly, as a service provider, we need to provide all this interconnection between the physical datacentres to make it look like a seamless cloud service. It gets more complicated for service providers like Orange and BT, who have this hybrid approach, this mixed approach: we can provide connectivity into a customer’s own datacentre - some customers still do have their own datacentres - we can provide access and services and connectivity to Orange datacentres - we operate 25 or so datacentres around the world - or we can provide access to third party datacentres by using things like Carrier Ethernet to extend from our network into third party datacentres.
We signed an agreement with Equinix, so we’re actually connecting with all the Equinix datacentres.
So, I think there is one point - and I apologise now because I’m not technical and these are all quite technical questions - but from a sort of marketing viewpoint, as I say, we talk a lot less about datacentres and much more about the cloud today. So, I don’t know whether that helped answer your question.
That really gives some good perspective on it, because it comes along the lines of what I have observed. But let me go back and ask it maybe in a slightly more generic way; we will get in a second to how they like to have this interconnection. When I say the opportunity, one of the ways to look at it is: how much traffic is there? Is it growing? Are they spending more, or is it just static and we’re rearranging the pieces on the board? So, do any of you have a feel for what the traffic is doing, and maybe why it is doing what it is doing?
The growth - and I won’t put a number behind it because, you know, we operate globally and it’s different in different markets for different datacentres etc. - but, I mean, we’re seeing tremendous growth in this area because of this move to cloud. Particularly in the large enterprise space, we’re now finding that customers are a lot less reluctant to embrace cloud services, and therefore things like Office 365, Salesforce.com etc., which we are now starting to provide in this hybrid cloud environment, mean that the demand for capacity into datacentres is growing significantly.
I think it is growing tremendously partly because historically each enterprise had their own datacentres and fixed connectivity between them and their branch offices, so it was very static. If they wanted any additional functionality, they would add applications in their own datacentres.
But now, with the notion of all the shared datacentres that exist with all the cloud services, the enterprise doesn’t own everything any more. They actually have to be connected to the outside world, and not just in the hybrid cloud. So now everyone keeps a certain amount in their existing datacentres, but they also want to spin up some storage and whatever in Amazon or in an Orange datacentre, and so they’ve got to establish appropriate levels of secure connectivity there as well. So, the connectivity requirements are growing in line with the greater levels of cloud services that are being deployed.
Chris, I think didn’t Infinera collect some information on how that is actually coming to pass?
There are actually a couple of different technology trends that we’ve been looking at. One is associated with cloud and datacentres. Obviously, there has been a huge development amongst the technology providers moving, for example, from one gig cards to 10 gig, and the chip providers are already looking at 40 gig type technologies. So, the datacentres themselves are becoming a lot more dense. Partly it can be attributed to this general evolution of enterprise services into the cloud; the adoption rate is very different between different regions.
Another trend that we are starting to see is maybe not the adoption and deployment of NFV yet, but certainly a huge interest amongst the service provider community in trying to move certain networking functions into the datacentre.
If you look at this combination and this general trend, one thing that isn’t changing is the physical size of some of these datacentres, right, especially in the metro region, where real estate expense, power etc. could potentially be a limiting factor. But when you start cramming all this technology into the datacentres, what we’re starting to see as a growing requirement is the networking capability between datacentres. We have some customers who are building their business around these notions of virtual datacentres; what they really are is distributed datacentres that look to their end customers like a single virtualised datacentre. So they can provide connectivity to the customer at one of these datacentres and then provide dynamic bandwidth connectivity to a cloud provider, to the internet, whatever it is they do. All that requires increasingly more bandwidth between these datacentres, and I think this whole trend towards agility is also potentially creating a new business opportunity for these service providers. Instead of creating static connections between these customers and cloud providers, for example, they can start treating bandwidth more as a utility and sell them 40 gig of Ethernet for weekend backups as a service, rather than having lengthy turn-up times and requiring that the circuit stay in service for very long durations.
So, there is a combination of different technologies that we’re starting to see there that are affecting the landscape.
So, let’s talk about the datacentres themselves for a second, what we have right now. We’ve heard a little bit, Chris, as you mentioned, about this concept of cloud datacentres, these distributed datacentres. At the same time, tell us a little about what some of the large datacentre players - your Googles and Facebooks and Amazons - look like, and give some perspective on what it probably takes to serve this combination of different types.
They’re a little different class of customer than your normal service provider obviously. First, they’ve got the money to build these warehouse-sized datacentres. A lot of them are using those datacentres more as a backbone for their operations.
There are some interesting statistics that our Marketing Department has collected. Every kilobyte request you generate on Facebook generates a thousand times as much network traffic as the original request, and all of that is new traffic generated in their datacentres because of the way they developed their service.
When you do a Google search, the average search request actually traverses 1,500 miles. It’s crazy. It’s creating this new level of traffic between their private datacentres.
So, it’s slightly different than a normal service provider. In their situation, the bandwidth growth requirement between datacentres is growing at such a fast pace that what we believe is that it’s really a market opportunity with this metro cloud application where you can design optical transport systems that are very much datacentre centric, that are stackable, that can provide them huge amounts of bandwidth that they need.
I don’t necessarily see them as an opportunity for service providers to sell bandwidth on demand, just because of the amounts of bandwidth they are consuming, and their growth rates are tremendous.
Yes, those guys just want waves.
They just want pipes. They’re not looking for the dynamic connectivity. It’s the smaller enterprise, trying to deploy their three datacentres, that has to get connectivity between them but can’t afford to buy 10 gigs from everywhere. So, they do need adequate secure bandwidth between those datacentres, but they don’t need the higher volumes.
I think it is obvious: if you look at Google and their infrastructure, they literally put in their own fibre, so they probably have their own network. It is difficult to displace another operator, basically. That’s what it would be calling for.
It doesn’t remove the opportunity for them to leverage agility at the optical transport layer. It’s just that they are using it for a different application, a different purpose. For example, Microsoft doing a backup - it’s terabytes of data - so they’re going to swing the capacity to where they need it between their datacentres. They can forecast their back office events. That’s very different from how a service provider might want to leverage the dynamic nature of networking, because their end customers are enterprise customers; it’s not their own back office operations necessarily.
So, we’re still seeing SDN as a theme growing in both the service provider market and in the datacentre markets, but they’re probably using it for different purposes.
Though I do question whether they might risk being self-limiting with respect to how much capacity they want. I’m still hearing both stories very strongly: one saying the datacentres are asking for 10 gig, 20 gig, 30 gig and the like, and then the super massive datacentres asking for many wavelengths at 100 gig. So, it’s a little bit all over the map. Let me ask the service providers, going back: what have you seen? Can you reflect on the classes of datacentres that you are seeing in your areas, and what it looks like they want in terms of connection?
So, we essentially run, if you like, two separate verticals that address this. We have what’s called Next Gen, which is a service vertical specifically focused on the web scale, start [time] providers - Google and Facebook and Twitter and Microsoft and all the rest of them. There, what we’re selling them is large-scale infrastructure. So, we’re selling them 100G wavelengths - and now we’ve started offering 400G wavelengths - plus fibre pairs and all the rest of it. We did all that, and large amounts of datacentre space, for Microsoft recently, and so on. So, we’re running with that as a construct.
The major problem in most cases with addressing that type of customer with the infrastructure the carrier has is that they want a lot less resiliency than you build your network, or your datacentres, to deliver.
So, you mentioned that each Google request travels 1,500 miles or whatever. In fact, every Google request is sent to three separate Google datacentres and then it’s a race condition; whichever one wins that race is the one that responds to you.
Enterprises don’t operate like that. Enterprises build datacentres and application environments that are built to survive. Google has built a network and datacentre environment - and largely the web scale providers have built an infrastructure - that is built to distribute. So, you can build your datacentres comparatively very cheaply, because you can build them like a warehouse. You don’t have to put very much air conditioning in them. You don’t have to put in air filtration systems. You don’t have to have massive fire suppression systems and so on, because it’s only one of three that is going to respond to the query. Enterprises don’t build environments like that. They have to have a datacentre that is going to survive typhoons, fires, power outages, all that kind of thing. So, it’s massively more expensive to build the sorts of facilities that an Orange or a BT or a Tata would build, because we’re serving the enterprise market.
So, it’s not the same thing at all. We will provide to large web scale providers the large heavy lifting pieces of typically intercontinental infrastructure. That’s what we do.
For enterprises, we would provide datacentres, backbone capacity, sophisticated traffic engineering capabilities. Those sorts of things are very attractive to enterprises. We need to manage all that. So, it’s two quite different bits of infrastructure.
I would never go out and build a network that was able to serve web scale providers because, first of all, they never buy from you anyway; they build their own facilities. But, secondly, I could not then use those facilities for anything else, because they’re specifically designed for these types of companies to operate. So, like everybody else, we build our network to be highly survivable, very heavily managed, and the datacentres the same way, and all that sort of thing.
The other thing is that web scale providers will tend to put their major facilities in the middle of nowhere, where it is cheap to do so - the land is cheap, power is cheap, taxation incentives are there - and they don’t really need to be staffed. I don’t build those types of facilities. We build very highly reliable, highly manned locations with lots of expertise on site, and that typically means we have to build them close to where the customers actually operate their business, which is in a major metro area.
So, it’s two utterly, utterly different types of infrastructure and certainly, as far as the big players are concerned, it’s never an infrastructure that I will try to emulate. It’s not my business, and this is in fact why - continuing one of my various rants on the subject - SDN is not yet appropriate for the types of things that major carriers like Orange and Tata and BT do. It is designed to operate within a very closely, densely tied together datacentre estate, with one customer operating over infrastructure you probably own anyway. So, you’re the only customer of this infrastructure, it’s dynamic and all those sorts of things, and it’s quite a small geographic area. Our problem is that we’re trying to operate a network that has to work all the time. If it doesn’t work, in some cases people could potentially be at risk of dying. If a Google query doesn’t get cleared, nobody dies, usually. If somebody can’t phone the ambulance services in France, then the chances are that somebody will die. So, we can’t have experimental technologies, and we can’t have the concept of a single customer within our network. Orange has got far more customers than us, but every single one of those customers - even one with only one or two [nippy] international circuits I need to manage - has a different expectation in terms of availability, reliability, survivability, payload, faster services, quality of service, all those kinds of things. There is no mechanism yet that will allow us to do that.
So, to sum up a long rant, I think it’s important to keep these two concepts quite separate, because they’re very, very different things. It is simply not appropriate to consider a Google or a Facebook and all that sort of thing in the same breath as a Tata Communications operator, because they’re totally different companies doing totally different things.
I will be like a politician; I will speak regardless of your question. Just adding to this, what we’ve seen over recent years - and it links back to what I said in answer to the previous question - is that we’re also seeing a change in the nature of the connectivity to a datacentre. I’m talking here about the enterprise style of datacentre that James was just explaining.
In the past, an enterprise customer who wanted co-location space, or whatever secure space, in a datacentre would have his own dedicated access into that datacentre. So, we would provide from the VPN a physical connection purely to that space in the datacentre, dedicated to that customer.
Now, what we’re seeing is that we are launching services where we’re using much more shared infrastructure. So, we have this product which we call VPN Galerie. Again, we install a single, very large router in a datacentre, but we give access into that datacentre to any of the Orange customers. So, they’re using purely shared infrastructure. For the customer, it’s much more flexible and much more cost-efficient, because they’re not having to have their own dedicated bandwidth. They use shared bandwidth.
Obviously, and it does kind of tie back to your question, it’s a quantity point. The size of the pipe that we are installing is much, much bigger, although not necessarily bigger in terms of the collective bandwidth, because previously we had 100 pipes and now we’ve got one, or two if we’ve got backup. So, the individual pipe has got bigger, and it has also got bigger because the demand for these services is growing. But, as I say, really what we’re seeing is this move to much more shared infrastructure, shared capacity into the datacentre, which again comes back to looking at things like, I’m very old as you can see, what I still refer to as network management. We talk about SDN, but we’re talking about network management. We need to manage the network and the way the network allocates capacity from one end to the other, and that’s really what this is about. When you’re using a shared resource, a shared access into the datacentre, you need to make sure that any of the customers that have subscribed to the service can get access to that application whenever they want it.
As I say, it wasn’t the question you asked, but.
I think what both of you brought up was that, for datacentre operators that are not the very, very large Facebooks, you’re able to provide more types of services over a shared facility, where it is not just connecting to one of the applications. You’re actually providing access into the datacentre, rather than just strictly connecting it to another datacentre. That’s something important to keep in mind with respect to service providers. It still leaves us the question, then, of what kind of opportunities there are for these smaller datacentres. The first part I’m curious about is: are they just sprinkled throughout a metro area, or do some of these smaller datacentres do interconnection with datacentres that are far away, over a long-haul network? How do you all see that?
I’ll go first. If you use us as a case in point, and, I have to be honest with you, I don’t know the number, so maybe I ought to check and come back and tell you, let’s say, for example, we operate 20 datacentres around the world. Frankly, datacentre isn’t, at the core, a major part of our business. We provide datacentre services because it’s something that our managed services customers want. But those datacentres, with the exception of the gigantic ones in France and Spain and Poland where we’re the domestic operator, are largely what you would term small. They’re either a floor in a third-party datacentre, or maybe they’re our own physical datacentre.
But the reason they are like they are, whether they’re in Brazil or in Johannesburg or wherever, is proximity to the customer base, because even with the technology and the amount of capacity we have today, there is still a question of latency and therefore of proximity to the type of customer that you’re looking to serve.
But, in addition, those customers, as James has already explained at length, the kind of customers that we’re serving, are looking for resiliency and redundancy. So, even though we may have a datacentre in Johannesburg, the kind of customers that we’re providing services to from that datacentre will need to have a disaster recovery plan in the event that that Johannesburg datacentre gets blown up or burnt down. So, clearly, as part of our network fabric, we need to interconnect our datacentres, and it may be that the redundancy plan for the Johannesburg datacentre could be another Orange datacentre in Nairobi or somewhere else, or it could be a third-party datacentre where we rebuild a mirror image of the instance in the third-party datacentre and connect it to that.
So, certainly I would not call them small, because you may write that in an article, but we do operate these distributed datacentres for proximity reasons.
I think if you look at the datacentre operators, Equinix, for example, does connect together datacentres in a market and will do that themselves. Over that they’ve created various incarnations of a market fabric, which originally interconnected internet service providers, and then attempted to do [what also (inaudible) started off by doing], providing interconnection into (inaudible) operators focused around Ethernet services, so Ethernet exchanges and so on, and now is a fabric to interconnect cloud service providers with enterprises and (inaudible) and all that sort of thing.
So, they will do that in a market, and they may have up to five or seven datacentres in an individual market. But they are not at the moment building infrastructure between datacentres, for the reason that, as Chris said earlier, they see that as competing against their broader market and they don’t want to do that.
Other people don’t seem to have that worry. Digital Realty, for example, used to be just “here’s my lovely building”: if you want to take space in it, you take a floor and you fit it out with all your own generators and all of your own chillers and everything. It’s more of a real estate game, as the name suggests. They are creating smaller units that they will sell, and they can now provide connectivity between those datacentres. But today we don’t see a huge take-up of that kind of thing.
Then some people in the real estate industry do provide connectivity which links certain groups of datacentres, but not necessarily overall. I think the reason for that is that it is quite a risky sort of thing. You may find that there is tons of requirement between two markets and not much requirement between another set of markets. But, because you’re providing an overall product or service, you have to have it in place everywhere, and that becomes quite an overhead for them to operate and run if people are not actually consuming it.
Also, if you go down the track that says, oh, by the way, I’m just going to use it when I need it, I’m not going to pay for it in a fixed kind of way, then that capital expenditure and that ongoing operating cost become risky and unattractive, because you don’t know when you’re going to get a return on it. You might do fantastically well one weekend when everybody needs to play something over the net, and then have two months when nobody wants any capacity.
So, we don’t see a massive rush of datacentre operators towards providing connectivity. The only exception I have seen is ones who are in geographically inconvenient places. Iceland, for example. A lot of people don’t want to build a new route out to Iceland, so they’ll tend to say you can get to us from London or Amsterdam or something like that, and that won’t actually be capacity that’s owned and operated by the datacentre operator. So, I see that as facilitating access and facilitating utility of the datacentre, as opposed to actually being a network connectivity service as such.
Yes, there’s a real range. It’s amazing. You get certain datacentre operators who are just space and power, but all of them are seeing more of the Equinix model. So, the first thing that they do with connectivity services is fibre panels, and they charge an MRC associated with that. That’s the first step, but they still don’t have any virtual connectivity taking place. They’re not setting up connectivity on a [LAN] to physical fibre. As soon as they’ve got to go within a market and things get complicated, they can’t afford to run fibres from a facility in one location to a facility in every other location, so then they actually have to put in some kind of switching capability. Then, as James said, they have to buy capacity between those datacentres from somebody, and if they buy it and they don’t really know how much it is going to get used, there is a significant expense associated with buying those things in the hope that their enterprise customers are going to use it.
So, I think it is less likely that the datacentre operators, from a business model perspective, are really going to get into connectivity. They also run a big risk. If they start competing with the service providers, then the service providers will pull out of their datacentre, and if they do, their datacentre becomes disconnected. If their datacentre is disconnected, it’s not very valuable space.
I certainly see both. Even when you interconnect within a market, and take a look at CoreSite, where we orchestrated the connectivity so that they can set it up on demand from any location within that market, even then markets often span cities. They usually try to keep to within a city, but as soon as they start expanding across one, they’ve cut out the service provider, and then they have some risk of competing with their partners.
There is also a small practical consideration, which is that if you’ve constructed yourself as a taxation (inaudible), a REIT, in the US, you can only show a certain amount of income that comes from services which are not related directly to real estate investment. So, you’re a real estate investment trust, a REIT. There is a very attractive taxation structure built around that, and a reduction in overheads and costs and all that sort of thing. But if you’re a real estate trust, the primary source of revenue has to come from real estate services. As soon as you start going off and charging for connectivity, that eats into your eligibility as a primarily real-estate-focused venture, and so that’s why it is impossible for anybody who wants to take advantage of that taxation structure to offer too many additional services.
But because of the lack of the Third Network, because of the lack of ability in the service providers to set up connectivity in an agile way, you put the datacentre operators in this competitive bind. So, an enterprise comes to you, the datacentre operator, and let’s say you’re Equinix and you have actually got three datacentres in the region and you’ve actually bought capacity between them. You can, in real time, set up connectivity from anywhere to anywhere in that market. You have that kind of agility. You’ve pre-built the provisioning. You’ve orchestrated it. You’ve got that infrastructure available.
Now, then Chris comes to you, and you get the [servers online]: put one datacentre here, another datacentre there, another datacentre at a third site. In real time I can establish the connectivity from anywhere to anywhere. I can also connect you into Amazon and cloud, because they’re in [a location]. Then I can connect you to Salesforce with guaranteed quality, so they have this differentiated offer.
Now, by contrast, if another datacentre operator comes in and decides that they’re going to rely on a service provider to provide all the connectivity between these datacentres, they can’t do that on demand any more, because they have to place an order with the service provider to establish that connection. Considering that that’s a 30 to 90 day exercise, depending on what we’re talking about, that can mean a really significant disadvantage. So, they’re in this battle, either competing with their partners, or hoping that the partners will become more agile so that they can actively participate while competing with the ones who have bought the infrastructure.
Okay, so you’re saying to some degree they would rather not take on the risk of having more capacity than they really need, yet they have to have it always available because customers want it on demand?
There is one of the things James said that I don’t quite agree with. I don’t think datacentre operators want to become telecom service providers doing connectivity if they can help it. That’s another big, complex, hard business, and I don’t think most of them want to be in it. They would rather specialise in datacentres, the connectivity of the datacentre and the cloud services that they provide within that, and rely on somebody else to provide the connectivity for them. But if they can’t get the agility that they need from those service providers, and they find themselves competing with others who have paid for the infrastructure and have that agility, then it’s a difficult situation that they have to balance.
It just strikes me as an opportunity for the service providers, because those types of datacentre operators are having to face that decision; to some degree they’re facing those decisions now, and at least in time they can perceive whether their choice was right or not.
Yes. Although again it’s probably a bit off the track, and this is my personal view rather than the official one, one of the things that I still have some question marks against is this whole notion of on-demand services. You know, do customers really want truly on demand? Again, in the enterprise environment it’s an interesting discussion. Clearly, there are some sectors that tend to require it, but I think rather more of the customers don’t really. Most of what they do is actually quite predictable. They don’t have massive surges of requirements for very short periods of time. The other question linked to on-demand service is pay-as-you-go, usage-based pricing, which is an interesting discussion I often have with customers: okay, you’re saying that you want to just pay for what you consume. But, if you flip that around, that means that you’re going to be sitting there on the first of the month wondering how large the bill is going to be that arrives on your doormat. As a CIO or Head of IT or whatever, do you want that? Or do you want the predictability of a monthly recurring fee, because you’re paying for a fixed level of service and at least you know this year it is going to cost me a million or whatever?
As I say, often, when you have that discussion with the enterprise customers they come back and say well actually, no, we like the fixed monthly model. We don’t want a whole load of usage-based services that are fluctuating all over the place that we can’t control the budget for.
I think it is a question of how agile is enough.
Because while enterprises, I agree, have got some level of predictability in their bandwidth, and I don’t foresee any time soon that the underlying infrastructure needs to turn capacity up and down from moment to moment, enterprises do put in a branch office. Enterprises do decide that they need some additional capacity for their content delivery network, or to move content to their other datacentre, and how long those changes take does matter. So, if it’s 90 days, that’s too long in my opinion.
I would agree.
The value of going from 90 days to 30 days is huge. To go from 30 days to one day is still valuable. But to go from one day to one minute…
Yes, I agree fully with you. As an industry, absolutely. I had a very short interview outside, and the chap said to me, how would you measure progress, and I said, well, it’s quite easy. If you take the starting point today being typically 60 or 90 days, and the target being near real time, you can actually see how easy it is to track progress. I mean, there are lots of things that we can do, and are doing collectively, to improve the lead-time to connect, as we call it.
I think, for example, it’s almost like real estate. If you do a long-term lease, you are going to get a much lower cost per day than if you go to a hotel for a one-day stay. So, they want to balance that, but it depends whether you can predict exactly what is going to happen.
So, if a company that can only do 90 or 60 days today but offers a slightly lower cost is competing with somebody who can do it in five days at a slightly higher cost, then the more agile one is going to get better margins and be more capable. Ideally, in addition, that agility is not achieved through hiring more people; it’s achieved through appropriate levels of automation, so it could actually reduce costs in parallel with providing that type of agility.
We’re down to our last five minutes. Let me see if the press have any questions for the panellists.
It may not necessarily be that the enterprises are looking for bandwidth on demand, but for the service provider, if they can offer a service to the market that’s quicker to deliver than their competitors’, that’s a competitive advantage; the agility helps them win more business. So, I think that’s what a lot of them want. At least the customers we talk to are interested in the agility we can provide, so they can reduce that provisioning cycle from 90 days to maybe one day, or they can offer 10 to 100 gigabit Ethernet in 10 days. That’s a great selling point for the customer who says, I’m going to choose between this one and this one: I can get this one in 10 days, or that one in 90 days.
I’m sorry for cutting you off there. I suddenly realised, halfway through talking about it, that that’s one of the main topics session two deals with. If we answer all those questions now, we’re going to have a problem when we get to session two.
You’ll be thoroughly warmed up. Well, thank you very much. Are you sure there are no questions from any of the press?
Is one of the major drivers behind the [high speed datacentre interconnect] market more the carrier side, the service providers with their many services, or is it the enterprises?
My personal view is that I don’t think it will be the enterprise. I think it will be a mixture of what I would call wholesale providers, that is, real infrastructure providers, and service providers.
I mean, today, most of the service providers (multiple speakers). I mean in terms of the infrastructure. If you look at Orange, in France obviously Orange owns the physical infrastructure. We own the fibre in the ground. But if you look at the US, for example, where let’s say we’ve got three datacentres, we will provide the interconnection between those datacentres as part of our service, but we’ll have to go and get the physical infrastructure from a local carrier, whether it’s Comcast, AT&T, Verizon, etc. So, I think the provision of the fibre will come from those infrastructure carriers, because they are the people who own it.
(Inaudible)..from one company so they have a lot of money because they have been doing this for datacentre interconnections.
I wouldn’t say so. It depends, again. We’re selling the service to the enterprise customer, which would include access into the datacentre as part of the VPN. So, we’re providing service over the top of the raw infrastructure. We’re providing the managed service; we’re providing security, network optimisation etc. as other services, which is where we typically make our margins and our business, rather than from the cost of the raw bandwidth, which in that case we would buy from a provider.
A typical carrier lost money (inaudible).
That is an [integral] market, which is why, as I say, I think there are some players that are in that business. Typically the service providers aren’t, though they may be in their home market. When I say service providers, I’m talking about carriers like Orange. The margins and the opportunities are in the services wrapped around, rather than in the pure bandwidth picture.
I think most carriers that own fibre assets today are not in the process of selling those off. If you take the German market as an example, the ability to lay new fibre is very, very limited, because all of these city councils and so on don’t like having their roads dug up. They have all sorts of restrictions around doing that sort of thing. So, the ability to lay new fibre, new duct and all those other things is extremely limited. In fact, specifically in Germany it’s almost at the point, from a regulation perspective, where it is almost impossible to dig new fibre through. So, anybody who has fibre and duct is tending to hang onto their fibre and duct, rather than wholesale getting rid of it to other people. There are a couple of exceptions, but they are very, very rare.
So, most service providers like Orange or BT and so on have fibre assets, but then they build value-added services and business on top. They don’t sell large-capacity fibre infrastructure in metro regions, except in their home markets and in certain circumstances and all that sort of thing. There is a small set of service providers whose business is fibre, in which case they typically don’t have all the value-added services. They’re not a large internet service provider. They’re not a large VPN service provider. They are not any of those sorts of things. They’re basically an infrastructure provider, and they sell into specific cases.
So Deutsche Börse, when it connects together its trading floors and so on, will take fibre from somebody like a euNetworks. They’re not going to go to [DTAC] and buy fibre. That’s not the business of [DTAC]. [DTAC] is a completely and utterly different business.
It’s the same case in France, and the same in the UK and Germany and so on. They will go to someone like (inaudible) to buy fibre infrastructure, and then they will go to a service provider like AT&T or Verizon or somebody for services. These are two completely different things. So, you can’t say that we’ve lost money by it moving from one to the other, because in most cases we didn’t provide the other thing in the first place anyway.
Elena Varganich (Networks & Telecommunications)
I had a quick question for James. Am I understanding right when you say that SDN is not ready for deployment with Google and Vodafone or for other enterprises?
So, I think what I would say is that where it is constrained geographically, where it doesn’t have to cover a big area and it’s serving a single customer, that might mean an enterprise who has a couple of datacentres within a particular market or something like that, and they need to provision things and move services around between those two datacentres, and SDN is part of it. That’s something that is deployed. It is known. It is understood.
What I’m saying is that when you take SDN into a carrier environment the size of a Tata or a BT or somebody like that, Orange and so on, and scale down to the local network, it’s a completely different set of challenges. Moving things around inside a datacentre and moving things around inside a massive network which people are hugely relying on are two different things, and SDN is not, in our opinion, yet at a maturity where it is able to do that.
It is appropriate for somebody like a Google or a Facebook or whatever, because their services are built with failure considered already, and the services are not critical. Despite the fact that you may be very unhappy when you can’t do a Google search, it is not going to be the end of the world. If the Tata network goes down and 20% of the world’s international voice disappears, and 785 operators can’t roam, and 21% of the world’s internet routes have gone down, that will be bad.
I think one thing that is important, though, is that when James is defining SDN, he is defining it very much as the OpenFlow-based SDN controller. Sometimes when other people are talking about SDN, they’re talking about a much bigger thing which includes NFV and SDN; it includes orchestration and so forth. In which case there might be certain applications that apply to lots of different providers. It really depends on how you define SDN. Taking the SDN that originated in the datacentres of the big guys and using it to accomplish that same thing in the broader market, I agree with James, is still a ways out. But what has happened is that the term SDN has often been expanded to mean network management and a lot of other things, and in that case it’s a different conversation. So, it’s important to be clear.
Well, thank you very much for your participation. Hopefully you’ve learned a lot.
Above, Rick Talbot at MEF SDN & OpenFlow World Congress roundtable
PHOTO / telecomkh.com