Boxing clever - or is infrastructure convergence the way to go?

Date: Mon, 01/17/2011 - 21:20

It's all very well for network vendors to go on about the need for a radical network rethink, a converged infrastructure and intelligence built in from the ground up - but has real-life business ever gone that way? For every greenfield project that takes off from day one, there are umpteen other businesses that have grown through the messy process of accretion, acquisition, and elimination of bits that fail.

Clive Longbottom, Service Director, Business Process Analysis, Quocirca

Yes, the ideal of the simple, agile and intelligent infrastructure is appealing, but how long would it stay that way before business and technology moved on and bits had to be added? So why not just continue the way we have always done?
There are a lot of new add-on boxes out there offering network acceleration, optimisation and management, and the good thing about add-ons is that you can replace them when you want to. This is surely the comfortable way to do business?
Today we go head to head - boxes versus built-in intelligence. And the team includes people who use enterprise networks - whose side are they on when it comes to everyday practice?

Panellists: John McHugh, VP & Chief Marketing Officer, Brocade; Andy Bryant, Alliances & Social Media Worldwide, HP Networking; Trevor Dearing, Head of Enterprise Marketing, EMEA, Juniper Networks

Introduced and Chaired by Clive Longbottom, Service Director, Business Process Analysis, Quocirca
My panel of gurus: you've already seen one of them, John, on the right here. Then we've got Andy from HP in the middle, and Trevor from Juniper on the end. I've just got a few slides, as in the ones in this [whole way]. I'll go through those quickly, then ask a few questions. Then we'll take it to the floor. And then we can disappear off on a coach and go and get some food.
So boxing clever, is infrastructure convergence the way to go? So let's get away from unified communication and converging all of the means of communication and collaboration together. And let's look at what we can actually do down at the fabric level, depending on how we define fabric.
So the first way is the stack approach. This is the way that we've tended to do things in the past. So we start with a network. We say great, there we are, we've got a foundation. Let's put something on top of that, another piece of hardware, being the server. Let's stick an operating system on top of that, an application server, one or more applications, and get the whole thing together and off we go, absolutely brilliant. The trouble is, we've got multiple layers, weak links within the whole thing, multiple additions of latency. So everything can get slowed down, which, with a lot of the things we've done in the past, wasn't really an issue. But as we're moving forward, it will be.
So the benefits. Okay, it's flexible. As new technologies come in, we just write a new bit of code, slip it in, start it up, off we go. Everything is fine and dandy. It's scalable, sometimes. If we start to run out of resource, just give it some more. And if we get into a position where we're CPU- and then I/O-constrained, we can start using virtualisation on it. And the big thing here is virtualisation.
Having mentioned virtualisation, we can start to use it to say, actually, we don't need to go through those first steps that I described. We don't need to build everything with all the different layers. We can ship it out, stick it into a virtual environment, and there we go: less input required from the user.
But if we take the first approach, that installation and setup takes time, and people can get it wrong. It can be put into an environment which is not well-tuned for the job that it's meant to be doing, because we're sticking it on top of a general operating system; we don't set up the operating system to support that workload specifically. There can be issues around security, unless we layer various bits of the onion skin around it to say, okay, we've got to make sure that this can't be attacked, we've got to make sure that this is totally secure. There is the need for management of the total stack, so probably for every 10 software stacks that we have, we've got to have another server, running another application, which just looks after these. And is that really the way we want to go?
And then there is resilience. If we have one of these stacks, what happens if any part of it is a weak link and it goes down? Do we need another stack by the side of it?
Are we then looking at that 5% utilisation that we've been running at becoming 2.5% utilisation? All of these, we've got to look at. And latency through that stack can be an issue. We're not looking at the latency of the cloud. We're not looking at network latency. We're looking at pure software latency as we work up and down through the various layers we've put in place.
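As an aside, a minimal Python sketch of the arithmetic behind those two worries; the layer names and millisecond figures are illustrative assumptions, and only the 5% utilisation figure comes from the talk:

```python
# Illustrative arithmetic only; the layer names and latency figures are
# assumptions, not measurements from the discussion.
layers_ms = {
    "application": 2.0,
    "app server": 1.5,
    "operating system": 0.5,
    "server hardware": 0.3,
    "network": 0.7,
}

one_way_latency = sum(layers_ms.values())
print(f"Latency added by the stack, one way: {one_way_latency:.1f} ms")

# Resilience by duplication: a second, idle stack halves effective utilisation.
utilisation = 0.05           # the 5% figure quoted in the talk
standby_stacks = 1           # one hot-standby copy of the whole stack
effective = utilisation / (1 + standby_stacks)
print(f"Effective utilisation with a standby stack: {effective:.1%}")
```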
So then we've got the appliance approach, and two different directions we can go with this. One is an in-line, in-band approach, where it's a case of we have to work at line speed. We have to do this when we're looking at real-time or very near real-time situations on the network. The other one is to take it out of band, where we can be looking at things which aren't so dependent on time. We can actually move them off.
We can offload them, put them through an appliance which is highly dedicated to what it needs to do.
The other nuance on this is to say, well, let's not make it a completely separate device.
Let's either aggregate multiple devices together, as we've seen with unified threat management, where various organisations have come out with appliances that put many things together, or we can take what a lot of the network players have done and say, let's actually just shoehorn all of this into the essential big iron at the network level. Let's actually put it into the switch itself, either as a set of discrete parts, or let's write it into the operating system.
So the benefits here: single box. Get it out, plug it in, and 15 minutes later you're up and running, the one throat to choke. If anything does go wrong, pick up the phone, say your box isn't working, what's the problem? Easy setup and implementation: because it's specific in what it's meant to be doing, it should be able to boot up, ask a few questions and there we are. It can be highly secure, and I really emphasise the word "can". Here we can be looking at highly hardened operating systems. We can be looking at making sure that the security is fit for the purpose of what the box is going to be doing. And it can be highly engineered and highly tuned, so we can be looking, as we move through from 100 meg, to gig, to 10 gig, to 40 gig, to 100 gig, at these boxes dealing with that sort of throughput in real time.
But upgradeability: you've bought a piece of hardware. If that hardware hasn't been engineered correctly, if it's purely using ASICs, if it's using badly installed firmware or code, then when everything changes it's a forklift upgrade. Throw it away. Start all over again. Again, resilience seems to be a recurring theme. If you only have the one box, and you've not managed to build in some means of fail-over, this is one weak link in the chain.
Scalability: if we do take a box that is made specifically for 1 gig or 10 gig, and all of a sudden we need 40 gig, can we actually put these together? Can we put workloads across the two? Can we balance it all? Or again, is it a case of, no, throw it all out, pay a whole load more money and bring it back in?
Then there is the need for multiple appliances. If we start saying, let's pile everything completely into one box, and we look at codecs, security, acceleration, WAN acceleration, LAN acceleration, compression, caching, you name it, all stuck into one box, is that a sensible, suitable way to go or is it just piling in more problems?
And that brings in the lifespan of the system. You know, we built data centres to last 10 to 15 years, and as cloud comes in, we're finding that really they're only set for five years or thereabouts. We used to buy servers on a three to five-year basis.
We used to buy switches on a five to seven-year basis.
And everything is compressing. The whole idea of going for an appliance-based approach is that it extends the lifespan. But will it really do that in the future? And then there are the skills. If we have built a stack based on Linux, Windows or Unix, and we've put it onto a standard box from HP, from IBM, from Dell, then you can go down to your local shop and buy the skills. But here, we might have something which is a completely hardened, semi-proprietary operating system with code within it which you could not in any way point at and say, this is standardised. It's only standardised when it touches the outside world.
So where do we get the skills? Or are we beholden, over a barrel, to the vendor that sold it to us, so that whenever anything is needed, we have to get on the phone to them? And the first question from them is, "Okay, where is the credit card?"
So that's my point of view. John, you've never been known to sit on a fence at all.
Which direction do you think that people should be looking at?

John McHugh - Brocade
I don't think there is a single answer to that question. Let me use an analogy and talk about fire escapes and furniture. If you're in the market to put up a building, there are certain things, like fire escapes, that you expect to be architected into the building and an integral part of the solution that you get. And if you think about the alternative, of fire escapes that can be bolted onto the side of the building or chosen independently of the building architecture, there are clearly things that shouldn't be bolted on.
On the other hand, there is furniture. There is the layout and interior use of the space in your building, where the idea of buying that and fixing it at the time you acquire the building, having it be permanent throughout the building's life, or having only one place you can ever go to get furniture for the life of the building, is ludicrous. It's just beyond thought. And so, to me, we kind of throw this problem up there as a general question, but when you look at it as specific, precise functional questions, it becomes much clearer. There are things that clearly need to be part of an integrated attack on the problem and should be part of the basic infrastructure. And there are things that should stand apart.
To me, it's classic. The OSI model, the layered approach to IT architecture, was defined 30-plus years ago. And frankly, I think it is still absolutely relevant and appropriate to think in terms of things that should be integrated at various levels, versus things that should be independent and allowed to free-float.

Clive Longbottom
So you say there are certain things which should be down in the network, certain things which should be dealt with in other ways. But you haven't picked on anything specifically.
A dear friend of all three of you, [John Shane], who was at Cisco a couple of years back, was saying all codecs should be down at the network level. No reason why they should be above that. I happen to agree with him. What do you think?

John McHugh
About codecs specifically?

Clive Longbottom
Video codecs, voice codecs, that when you're looking at the way the network is going, that to bring any transcoding out of the network into the operating system layer is ludicrous.

John McHugh
You're dealing with a specific fine point. To be honest with you, I actually believe that there is specific value in making sure that the most raw version of something is travelling over the network, that ultimately the network should aspire to very high performance and a very broad, complete set of service capability. And hence, the things that are actually flowing through it should be in their most raw, native form, so that whoever gets them on the other side has the ability to use as much of the information as possible to ultimately deliver their value. So to be honest with you, the codec problem I have no issue with being an integrated piece at the edge of the thing. It kind of falls into my fire escape example. But unfortunately, that's a slippery slope. Then intrusion protection seems to slip into being a network service that should be integrated, and then a lot of things that are clearly application domains. Like I said, I think it's a slippery slope.

Clive Longbottom
Andy, are you a do-it-yourself guy? Or are you an Ikea guy, who gets it pre-packed and just finishes it off? Or do you go down to a good furniture store, buy it, and have it delivered and put in for you?

Andy Bryant - HP Networking
Well, I think it's a combination. To go back to that video point specifically, I think there are good justifications and good reasons for having the network understand certain things about the protocols that are going across it. So for instance, with video, you might have the network routing a mobile-optimised version of a video stream to a mobile device, versus a broadband-optimised version to a Halo device. So there is a need for an element of understanding. But obviously the codecs have to be at the endpoints, because that is where the device is running them.
So I think the [inaudible] approach that John has talked about, from an architectural point of view, is very important. You have to be able to separate applications from infrastructure, so that you can have independent innovation in applications, not held back and tied into the infrastructure. To some degree, we have to separate that architectural layering in the OSI model from the physical boxes, because you need flexibility in physical boxes. You need scalability. You might need a stack of boxes.
You might need a blade server full of processing capability for some applications.
And other applications, you might be able to squeeze down onto running on the corner of an ASIC.
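As an illustration of the point Andy makes at the start of this answer, about the network choosing a stream variant while the codec work stays on the endpoints, here is a hypothetical Python sketch; the device classes, resolutions and bitrates are invented for the example:

```python
# Hypothetical sketch: the network (or an edge service) picks a stream variant
# that suits the requesting endpoint; the actual encode/decode stays on the
# endpoints themselves. Profiles below are illustrative assumptions.
STREAM_VARIANTS = {
    "mobile":    {"resolution": "640x360",   "bitrate_kbps": 600},
    "broadband": {"resolution": "1920x1080", "bitrate_kbps": 6000},
}

def pick_variant(device_class: str) -> dict:
    """Return the stream profile to route to this class of device."""
    return STREAM_VARIANTS.get(device_class, STREAM_VARIANTS["mobile"])

print(pick_variant("mobile"))      # a handset gets the lighter stream
print(pick_variant("broadband"))   # a Halo-style room system gets the full one
```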

Clive Longbottom
Trevor, are you a software stack or an appliance guy?

Trevor Dearing - Juniper Networks
I'm a network guy. So the reality, from where I see it, is that while there needs to be a lot of integration, you do have to create barriers. So everything that has to be in the network should be in the network. And everything that needs to be on top of the network should be on top of the network. And if you think about the reality of where we're going, and coming back to what Andy was saying, based on who you are, what application you're running and what device you're running, then for the flow of information from you to the information source and back, the network needs to be able to understand your security context. It needs to be able to understand the quality of service issues. It needs to be able to understand and enforce all of those pieces.
So I guess, coming to John's analogy of fire escapes and furniture, I think another analogy is a hotel, where basically you turn up. And based on your identity and information you give to the receptionist, they give you your card, which that gives you access to certain things around the hotel. It also gives you the right to be able to buy services, which you can then be billed for. It gives you all of those sorts of things.
And you are an over-the-top service in the infrastructure of the hotel.
So I think that while servers and applications themselves should not be part of the network, I think security functions, intrusion prevention, UTM, all of those things should be embedded into the network. Craig used one of my old favourite phrases earlier. But my new favourite phrase is that today's box is tomorrow's service. So what we're doing in separate boxes today will be integrated into the network over the next few years, in the same way we don't [build] Ethernet transceivers anymore.
They're just there.

Clive Longbottom
Okay. But the network itself: you've got the in-organisation core. You've got the out-of-organisation service provider core. You've got the cloud core, if you want to call it that. And you've got all the interconnecting between them. So that gives lots of different places where these things can happen. Whose decision should it be as to where it should happen? If you look at somebody like Trend Micro, who for the last 10 years have been saying that we should be stopping spam, we should be stopping viruses at the wire, as close to the point of origin as possible, getting rid of the threat at the network at that level: it's never happened. Why hasn't it happened? Where should the decision be made as to where we do these sorts of things?

Trevor Dearing
I think those decisions are ultimately driven by market forces. So coming back to what was said earlier about unified comms, if a service provider can make money out of that, then I think they will start to do that. I think they've had a challenge with trying to sell the idea of security to end-users, because end-users don't really grasp the ins and outs of what they're getting, and are not necessarily willing to pay for it as a service.
But I think in the business world, a service provider can deliver the same service, or the same security services, as you can create in an enterprise. Our security business for the service provider is as big as our security business for the enterprise. And they're using the same technology. Tying the two together in a collaborative way, as part of a service, is much easier. And I think that's one of the other things that will happen: the growth in managed-service-based environments for that sort of world.

Clive Longbottom
So Andy, security always seems to be top-of-mind, always seems to be bottom-of-pocket, always seems a difficult one for me. Looking at unified threat management, looking at the various approaches that there have been, you then look at somebody like Intel having been out and bought McAfee with a little bit of petty cash out of their pocket. Does this seem to be a case of, right, let's bring this down into an area where we don't even have to look at it being an appliance, it is going to be down at the silicon, it's something which we don't have to bother about any longer? Or is this still something where it's a case of no, it's got to be built into the switch, it's got to be built into the access point, it's got to be built into the wire, it's got to be everywhere?

Andy Bryant
Ultimately security has to be pervasive, because the enterprise is now much more of an open, mixed network, with iPhones coming into the network and connecting people together, as well as the corporate network. You can't simply surround yourself with a firewall anymore. You have to embed security within the network, in the most appropriate places. And that could be with an IPS or an IDS within the enterprise, within the data centre. Or it could be something provided as a service. But it isn't one or the other. You have to be able to do both.

Clive Longbottom
Slightly different tack now. A lot of the approach in the past has been based around, let's use ASICs in boxes, because they're cheap and they're effective for what they do. But as soon as things change, they can't move forward. There has been a move towards FPGAs to give a little bit more life to the boxes. And now we're seeing the bringing together of processors and FPGAs: Intel having done that with the Atom, having been putting them onto the same board for a period of time but still not putting them on the same socket, and IBM, with the PowerPC, also making moves into the same sort of area. Is this something which warrants a great deal of focus? Is it a red herring?
What do you think, John?

John McHugh
I tend to fall into the same point of view as Trevor. Things that are boxes are
ultimately moving towards software. And so this is a classic equation. Where you see boxes is where you can deal with the impact of funnelling a great deal of demand into a single point and then having it propagate back out. In other words, you can live with a chokepoint architecture. And anytime you implement a chokepoint architecture, you've got to throw a lot of money and a lot of cost at building up the complexity of that chokepoint so that it doesn't, in fact, damage the overall performance.
The ideal model for pervasive services, though, is that they're exactly that: pervasive. They don't require the chokepoint. They don't require a proxy architecture. Rather, they can be incrementally distributed to the farthest point of the architecture. And that inherently is a model that is going to be best served by software running on, if you will, thousands or hundreds of thousands of virtualised devices.

Clive Longbottom
But isn't this going to be predicated on what you were saying in your keynote, where it's a case of the storage network fabric having to be able to take these various aspects and provision them anywhere, at any time, in a transparent and completely open manner? And we're not there yet.

John McHugh
We're not there. But you're talking about what is ideal and what direction we're headed in. And I think, consistent with the picture I've been trying to paint, it is this layered approach, built on standards, so that those people who are creating some exceptional, specific service level or capability, and who right now are trapped in a model where they must exist on a proprietary piece of high-performance hardware simply because of the logistics, have a way forward. Their current model doesn't allow them any way to deploy this widely throughout the network.
So the transition I think that we would all be benefited from is a much more open architecture for networks, where customers can actually -- or I should say providers can actually distribute their functionality throughout this open ecosystem without so much constraint around high-performance boxes as hosting environments.

Clive Longbottom
Trevor, your view on that. One of the things we've heard, time and time again today, is about the need for standards and how this gives openness and everything, but it really is chasing something ephemeral -- I was going to say chasing the cloud, but that's the wrong word. It's chasing this ephemeral thing, because it's in no vendor's best interest to be a hundred percent standard: then they're the lowest common denominator. They need to have differentiation, and to have differentiation, you need to be outside of the standard, or in the way that you implement and use the standard. How can this be resolved? How can somebody like Juniper be in the position where you can be better than Cisco, you can be better than [inaudible], you can kick everybody else's ass out there, for want of a better phrase, but still enable this openness?

Trevor Dearing
I think you can turn openness into a virtue. It's very difficult to be the most compliant company in the industry because if other people aren't compliant, then you've got nothing to be compliant with. But I think the openness is not about necessarily being open with your competitors. It's about being open with other organisations. So we do a lot of work with Polycom on integration. So we have opened up our operating system to third-party developers and also developed a Web 2.0 platform, to allow third parties to work into that environment for security, for all sorts of things. So we work with Kaspersky, with [inaudible], with McAfee, with Symantec, all of those guys.
So I think the openness is about creating a network that is a platform, that is easy to then integrate other companies' services into. And I think that's a much better way of being open than necessarily just saying, well, we comply with RFC blah, blah, because everyone's implementation of an RFC is slightly different, as you say. I think that's especially clear if you look at the voice market, where a lot of organisations said, "Well yes, we support H.323, but here are our extensions to make it better." And I think we're getting better with SIP and technologies like that, and it makes it much easier for us to then implement SIP and SIP security on the products that we build. So it is better for everyone if there is compliance. But how you do it is the difference. And remaining proprietary is not a great way to go, as people have found.
But you have to be open in a much more intelligent way than just saying we comply with the RFCs; you have to be proactively open.

Clive Longbottom
Andy, here we're looking at what we can compress down, and to where. If we look at something like SIP, it's a case of it's all in the handset now. You just plug it in and, in many cases, it will self-provision and there we go. So we've got rid of the need for anything like a [mercs] to be able to get various systems together. Somebody can have a Cisco contact centre. Somebody else can have a Genesys one. The whole thing should work together when you're looking at VoIP.
One of the areas where we're still seeing problems at the moment is videoconferencing, with HP and its Halo system, Cisco with its own system, and struggling to figure out exactly where Tandberg comes in. You work alongside Polycom already, I believe. But what can we compress down into this so that we don't need the extra boxes sitting around, so that, having paid a quarter of a million to get my telepresence system, I can actually talk to somebody on the other end without saying to them, "Sorry, you've got the wrong quarter of a million bucks' worth of kit there"?

Andy Bryant
I think video is one of those areas that is still developing, especially on the high-end videoconferencing sessions. And as you said, we are working with Polycom and trying to evolve our video such that it can interoperate on a much more open basis.
But I'm afraid that's not really my area.

Clive Longbottom
I'll let you off then. So John, you were saying that today's box is tomorrow's software.
It seems to me that yesterday's software was today's box. So when you look at security, we had lots of security software around the place. It all went into hardware. People bought that hardware and went, "It's full of holes, it lets things through. We now need to start blocking all of those holes. We have to buy a lot more software. Oh look, the software is actually better than the box. Let's get rid of the box." So is tomorrow's software the day after's hardware again?

John McHugh
You know, that might be the most complicated and convoluted question that's ever been asked in my career.

Clive Longbottom
I was hoping you'd carry on the iteration.

John McHugh
I don't think I could. That's like two, three levels of parentheses there. I'm not sure I completely agree with the assertion in general. I think there has certainly been some indigestion in the industry over what has been purchased over the past five or six years. I think a lot of customers, especially in the last two years, applied a new set of economic models to their buying. They set new, higher standards to really demonstrate return on business value in the acquisition process. So in other words, I think there was a run-up to a lot of security engagement and deployment about three or four years ago, and then a lot of customers backed off and said, "I have trouble understanding and justifying the overlay that we're creating, based on the risk that's out there."
I think the challenge for our industry, going forward, is understanding that security and the application space really have to operate hand-in-glove. And you highlighted the question around the video application and the fundamental challenge of: I can buy a system to deploy this capability, but somehow I need to make sure all my customers and all my partners have exactly the same solution. Well, there is a similar one from a security standpoint, which demands that we have, if you will, some kind of standard mechanisms so that people can look at trusted partnerships or trusted endpoints and be able to understand what their level of support and their level of functionality is, in a way that provides that interoperability.
And it doesn't mean that every time you want to create a new business model, it requires a huge investment in IT and a run-up. To me, that frankly is a much larger issue right now than the performance characteristics. Using these techniques of distributed, virtualised software, which in many more mature deployments can work today, versus appliance-based, customised solutions, which tend to be the way that most first-stage or second-stage solutions have been rolled out, I think performance can be addressed. I think the bigger issue is the one you're bringing up, in terms of how do we get universal connectivity and drive that in a world where many vendors are driving proprietary, locked-in, me-only types of deployments?

Clive Longbottom
I think something you bring up there reflects back to something that Camille was saying earlier on as well, when you look at the graph of where IT expense actually goes and how you've only got a small amount which is actually focused on investment.
And the problem is, if that investment then has to be replaced in a year's, two years', three years' time, was it suitable investment? Or does it fall back into the keeping-the-lights-on environment?
So if you're only looking at 20%, 25% on proper IT investment, but 5% or 10% of that is something which is short term, then possibly that's the wrong direction as well. Trevor?

Trevor Dearing
Yes. Can I just pick up a phrase we used earlier? The phrase we used was that today's box is tomorrow's service, not tomorrow's software. So I think it will still be implemented on an appliance, a platform. But the reality is that in the virtualised world, you can no longer go and point at a device and say, that is the firewall that is protecting that server, because that application is now over there. What you have to do is turn security into a service. And the only way you can really do that is from a purpose-built platform in the network that happens to be running security and delivering that as a virtualised service around the network.
So I think there is no way we can deploy security on general-purpose processors. We've learned that over the years. So it does still have to be ASIC-based or programmable gate array based, or some combination of all of that. And equally, a lot of this technology cannot be just ASIC, because things like UTM need a huge amount of memory to execute. Intrusion prevention needs a huge amount of memory to execute, as do things like WAN acceleration, which actually does a lot of stuff in common with IPS. So I think it would all sit on a platform and be delivered as a service, and third-party applications can sit on that platform as well. But it's not, and will not be, a general-purpose processor.
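To make Trevor's point concrete, here is a minimal, hypothetical Python sketch of security policy keyed to a workload's identity rather than to a physical port, so the rules follow the VM when it moves; the class, workload name and rules are assumptions for illustration:

```python
# Hypothetical sketch: in a virtualised data centre you cannot point at a box
# and say "that firewall protects that server", because the application moves.
# Keying the policy to the workload lets it travel with the VM.
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    host: str                      # physical host it currently runs on
    policy: list = field(default_factory=list)

    def migrate(self, new_host: str) -> None:
        """Move the VM; the security policy travels with it, not with the port."""
        self.host = new_host

crm = Workload("crm-app", host="rack1-host3",
               policy=["allow tcp/443 from corp-lan", "deny all"])
crm.migrate("rack7-host1")
print(crm.host, crm.policy)        # same rules, new location
```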

Clive Longbottom
I was given the terminate warning about two minutes ago. So I think I'd better open up the floor for any questions that we have from anybody here. Anybody awake?
Come on. There must be at least one question. You've got the best opportunity
you've ever had, with three real luminaries sat up here. Make the most of it. Okay.
Well, if not, then I'll have to ask another question.
When you start looking at it as being a service, again you start to run into the problems of where that service should sit. When you look at Juniper with Junos, you look at Cisco with IOS, the operating systems at the network level are getting rather large. They're getting very complex. They're doing a lot which used to be seen as above the network level. Do you see that Junos is a major part of this? Is it where it should be run? Or is it outside of there? Or is it just a case of yes, we'll have to learn how we deal with the other network operating systems, and other people's routers, switches and everything else?

Trevor Dearing
So the obvious answer is yes, that's where everything should be run.

Clive Longbottom
I thought so, yes, including applications.

Trevor Dearing
I think the reality is that, to make the network do all the things that it needs to do and to do its integration, it becomes more and more complex. And we have to take the complexity out. We have to simplify the network, as John was talking about earlier, and make it as physically simple as possible. But what you need to do that is automation. And so the big opportunity in this is to try and make the infrastructure as simple as possible and make the automation as powerful as possible. And that all comes down to alliances. It's really about having an application, videoconferencing with Polycom for example, driving an automation process in the network that makes the network just react to it.
So from the complexity perspective, what you then do is take the very basic functions of the operating system, put them in ASICs in the hardware, and then embed a lot of that functionality around the network as much as you can. So it's a combination of simplifying the infrastructure but making the automation as sophisticated as possible, to be able to drive all of that functionality, regardless of whether it's servers, smartphones, end-users, security, or switching or routing.
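As a rough illustration of "simple infrastructure, sophisticated automation", the hypothetical Python sketch below shows an application asking an automation layer for what a video session needs, rather than touching switch configuration itself; the controller URL, endpoint and payload fields are invented and do not correspond to any real product API:

```python
# Hypothetical sketch: the videoconferencing application requests a service
# from an automation controller, which in turn drives the network devices.
import json
from urllib import request

def request_video_path(caller: str, callee: str, bandwidth_mbps: int) -> None:
    payload = {
        "service": "videoconference",
        "endpoints": [caller, callee],
        "bandwidth_mbps": bandwidth_mbps,
        "priority": "realtime",
    }
    req = request.Request(
        "http://automation.example.local/api/provision",   # assumed endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:                      # controller applies QoS
        print(resp.status)

# request_video_path("10.1.1.20", "10.9.3.41", bandwidth_mbps=8)
```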

Clive Longbottom
Well, this then makes it sound like there is also a need for the datacoms people to start to be able to talk far more to the actual application people and to the business, because when you're looking at automation, it cannot just be network automation. Here you are looking at how best to put together the actual virtual environment which is going to support a specific workload: where the best place is for this workload to actually be at any one time, and how we provide extra dynamic resource to it, which won't just be network. It will be CPU. It will be storage as well. This gets rather more complex than we're ready for at this stage. How long do you think it's going to be before we can be there?

Trevor Dearing
I keep referring to what John was saying, which [inaudible], the network really should be agnostic of the application and we know that. But there is almost a midway line.
So you should be able to go, and here is the network. It's commoditised at the edge.
It's sophisticated in the middle. We can forget about that, because what we're actually doing, from an application perspective, is talking to the automation that drives what the network does. So it's not a case of the application people needing to understand what the network does. It's that interface, that automation interface between the two that just means that those decisions should be as simple as possible.
So we want to break down, as much as we can, that piece that says, well, these application people who run applications don't understand the network, so things don't work.
It just has to happen. It's like if you've got virtual machines in different parts of the world, how do we do that transition? I agree absolutely again with John on the issues around the storage piece. That is the bit that holds everyone back. But the network shouldn't be the restriction. It should just be there and just work.

Clive Longbottom
So Andy, I'm hoping that you agree with that, because I'm just going to throw a curve ball at you on it. It sounds to me like, yes, here we have the network, we've got a high degree of intelligence, but it's going to be neutral in the way it approaches everything. Here we've got the applications, which, hey, they're the business logic.
And then we have this abstraction layer in the middle, which seems to me like the chalkboard cartoon we've all seen: lots of complexity, one little box, "a miracle happens here". Is this an OpenView-Mercury solution? Or is there something far more complex going on here, where we are all waiting for the miracle?

Andy Bryant
Well, I think if you look at the problem purely from a networking stack view, or purely from a storage stack view, you're covering parts of the situation. What we really need to be doing is thinking of the infrastructure as compute, storage and network together, because applications need some of each. So what you really want to get to is a point where you simplify it and consolidate it ("converged infrastructure", to use a marketing phrase), which is servers, storage and compute, tied together with operational software that helps to simplify the complexity underneath. And then applications can pull together the chunks of server, storage and network that they need to drive that particular application. So I think it's about simplifying and consolidating the infrastructure, virtualising it, and allowing applications to use what they need from within that infrastructure.
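A minimal Python sketch of the converged-infrastructure idea Andy describes: an application declares the chunk of compute, storage and network it needs, and an operational layer carves it out of the shared pool; all field names and figures here are assumptions for illustration:

```python
# Hypothetical sketch: a declarative resource request against a shared,
# converged pool of compute, storage and network.
app_request = {
    "application": "order-processing",
    "compute":  {"vcpus": 16, "memory_gb": 64},
    "storage":  {"capacity_gb": 500, "tier": "fast"},
    "network":  {"bandwidth_mbps": 1000, "isolation": "vlan"},
}

def provision(req: dict) -> str:
    """Pretend to carve the requested slice out of the shared pools."""
    return (f"{req['application']}: {req['compute']['vcpus']} vCPUs, "
            f"{req['storage']['capacity_gb']} GB storage, "
            f"{req['network']['bandwidth_mbps']} Mbps network reserved")

print(provision(app_request))
```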

Clive Longbottom
Okay. Well, I think I'll draw it to a close. I think what you've actually seen there is that, yes, it's horses for courses. There is no simple answer where my view is that, yes, this one is right. There are certain things which are better done in software, certain things which are better done in hardware, and as time goes forward, where that resides is going to change. Automation will be the key, particularly when we do start moving towards a cloud-type environment. But I'd just like to thank the three panellists, who, as has been pointed out to me, are all alumni of Nortel. So they are all friends from there, even if they are now in competition in certain areas. So I'd like to thank them very much. Thank you.

