Ronny Lam

about://tech

Cloud9 IDE on Google Compute Engine

I have always had a love-hate relationship with Cloud9. Having an IDE in the cloud sounds very cool, but in my opinion it needs to offer equal or better performance than a laptop or server. This is where things went wrong in the past: building Ruby on Rails apps was not much faster than doing it on a Raspberry Pi. Of course that is all good news for the Pi, but very bad for Cloud9. That may change with their latest announcement:

Cloud9 built support for Compute Engine into the backend of the soon-to-be-released major update of Cloud9 IDE! We’ve seen major improvements in speed, provisioning and the ability to automate deployments and management of our infrastructure.

Cloud9 and Google Compute Engine

We’ve optimized our architecture to require just one hop between the hosted workspace and the browser running Cloud9. This intermediate layer is our virtual file system server (VFS). VFS connects to the hosted workspaces and provides a REST & WebSocket interface to the client running in the browser.

This new update is expected to be released this quarter, and I can’t wait to give it a try.
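As a rough illustration of how such a one-hop setup could look from the client side, here is a minimal Python sketch that talks to a hypothetical VFS-style server over REST and WebSocket. The hostnames, paths and message format are my own assumptions for illustration only, not Cloud9’s actual API.

    # Hypothetical client-side sketch: one hop to a VFS-style server that
    # exposes REST for file listings and WebSocket for live change events.
    # Endpoint names and payloads are invented for illustration.
    import asyncio
    import json

    import requests        # pip install requests
    import websockets      # pip install websockets

    VFS_BASE = "https://vfs.example.com/workspaces/demo"   # hypothetical address

    def list_files(path="/"):
        # REST call: fetch a directory listing from the hosted workspace.
        resp = requests.get(f"{VFS_BASE}/files", params={"path": path})
        resp.raise_for_status()
        return resp.json()

    async def watch_events():
        # WebSocket call: stream file-change events from the workspace.
        async with websockets.connect("wss://vfs.example.com/workspaces/demo/events") as ws:
            await ws.send(json.dumps({"subscribe": "/"}))
            while True:
                event = json.loads(await ws.recv())
                print("change:", event)

    if __name__ == "__main__":
        print(list_files("/"))
        asyncio.run(watch_events())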

Cisco ACI or Insieme Presentation

Today I attended a presentation on Cisco onePK and ACI at Cisco Netherlands. The first is already widely known, which was lucky, because the presentation lacked the technical detail I was looking for. Most of the day was spent on Cisco ACI, which was very interesting, and despite some minor doubts I was very much impressed.

You can read a very good review from John Herbert, or read the details and watch the video on the Cisco site.

In my view Cisco is getting on par with, or even a little ahead of, Arista with this launch. Performance looks great in the slideware and link failover is very fast. One of the interesting things was the introduction of 40G optics that can reuse the multimode fiber you are using for your current 10G links, at a price no more than 10% higher.

The ACI infrastructure can connect anything to anything, whether that is VXLAN to VLAN or subnet to subnet. ACI strips all of that away and connects applications based on policies. This is potentially very cool.

Cisco ACI

ACI is based on a fully meshed leaf-spine fabric that is fixed in architecture but variable in box count. From one, three, up to 31 controllers, called APICs, are connected to one or more leafs and are used to store policies and distribute them to the devices. The bad thing is that this is per fabric, and thus if you have multiple datacenters you have different fabrics with different controllers. Moving compute power within a datacenter is no problem, but when moving it to a different datacenter you run into the same problems as today.
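To get a feel for what policy-driven configuration through such a controller could look like, here is a minimal Python sketch that authenticates to a controller and posts a policy object over REST. The hostname, endpoint paths and payload structure are assumptions of mine for illustration, not the actual APIC API.

    # Hypothetical sketch: push a policy to a fabric controller over REST.
    # Hostname, paths and payload fields are invented for illustration only.
    import requests

    CONTROLLER = "https://apic.example.com"    # hypothetical controller address

    def login(session, user, password):
        # Authenticate once; the controller is assumed to return a session cookie.
        resp = session.post(f"{CONTROLLER}/api/login",
                            json={"user": user, "password": password},
                            verify=False)
        resp.raise_for_status()

    def push_policy(session, policy):
        # The controller stores the policy and distributes it to the leafs and spines.
        resp = session.post(f"{CONTROLLER}/api/policies", json=policy, verify=False)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        policy = {
            "name": "web-to-db",
            "consumer": "web-servers",      # endpoint group that initiates traffic
            "provider": "db-servers",       # endpoint group that answers
            "allow": [{"protocol": "tcp", "port": 3306}],
        }
        with requests.Session() as s:
            login(s, "admin", "secret")
            print(push_policy(s, policy))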

Again, I am impressed by what Insieme, as a spin-in from Cisco, has delivered, and I am looking forward to seeing how Cisco will position it and how the market will use it. Because in the end, this is not your day-to-day solution.

Wide Area SDN: Close, But…

This article argues that SDN on the WAN is still not there yet and names some products and solutions that might be able to fill the gap. But SDN on the WAN is only half of the solution.

Half of the solution to what, by the way, is a different discussion. I can argue that current protocols like GMPLS (thanks @mbushong) and PCEP are already filling that gap in the WAN. Depending on your definition, they are very close to SDN.

But this is still only half of the solution. What you want in the end is inter-AS SDN, especially for delivering global VPNs and services from OTT providers. Where SDN in the provider WAN already requires some openness, inter-AS SDN requires even more, and the current BGP trust model would have to be extended. I don’t see that happening very soon.

SDN: Capability or Context?

After a good debate that started on Twitter, Michael Bushong wrote a great post trying to move the SDN discussion away from technology. He is even trying to move it away from a pure process context, the one I was tending toward. Because the term SDN is so vague, I had started to see it as a movement, just like DevOps. But Michael is correct that that is not the right path either. He defines SDN more in terms of contexts:

Delegation, Abstraction & Globality

Details of these contexts are in Michael’s post. Of course I am going to explore whether this list is complete, but for now I can’t think of any additions. His conclusion is even better:

it is entirely possible to build open, controller-based systems that fail to deliver against any of the promises of SDN, just as it is possible to use existing technologies in new ways

In the end SDN, or the network as a whole, should not be a goal in itself. It is always a means to deliver business goals.

Trying to Explain SDN to a Kid

Today I told my son about a great discussion I had last night on Twitter, with people from different time zones involved. When he asked me what it was about, I told him it was too technical, but when he insisted I had to explain the concept of SDN to him. For me this was a good thing, because it forced me to rethink the definition of SDN and translate it into something a kid could understand.

But first I had to explain to him the concept of networking; no, I had never tried that before. He plays a lot of Minecraft, sometimes online on a server. So I used that to explain the client-server concept and the boxes in between that connect the two. Of course there is not just one path of boxes, so the traffic between client and server can take different routes. In the old days, which is still today, all of these boxes had to be configured by hand, and each box figures out by itself what to do with incoming traffic.

In the SDN world all these boxes connect to a single controller application and are configured from that single application. There is no longer a need to configure or program every single box. The controller application is kind of the Google Maps of the network. Whereas before, every box had to find out from its neighbours how traffic should flow, now Google Maps knows all the sources and destinations and all the routes in between. When a box knows something that Google Maps doesn’t know yet, it updates the map.
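For fun, here is a toy Python sketch of that analogy (obviously not part of the explanation to my son): the boxes only report what they see, and a single controller keeps the full map and works out the routes. This is purely illustrative and not any real controller API.

    # Toy illustration of the "Google Maps" analogy: boxes report links,
    # the controller keeps the full map and computes the paths centrally.
    import networkx as nx   # pip install networkx

    class ToyController:
        def __init__(self):
            self.topology = nx.Graph()   # the controller's "map" of the network

        def report_link(self, box_a, box_b):
            # A box tells the controller about something the map doesn't know yet.
            self.topology.add_edge(box_a, box_b)

        def route(self, source, destination):
            # The controller, not the boxes, works out how traffic should flow.
            return nx.shortest_path(self.topology, source, destination)

    if __name__ == "__main__":
        controller = ToyController()
        controller.report_link("minecraft-client", "box1")
        controller.report_link("box1", "box2")
        controller.report_link("box2", "minecraft-server")
        controller.report_link("box1", "box3")
        controller.report_link("box3", "minecraft-server")
        print(controller.route("minecraft-client", "minecraft-server"))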

This is kind of where my explanation ended, but the good thing was that he understood it very well and could also see the benefits of such a way of working. The funny thing is that I used Google Maps in the explanation just because he knows it, while this week I started using Waze myself, a social mapping system, acquired by Google by the way. All the Waze apps driving around update the centralized map almost automatically; of course some checking is done by humans. That makes the analogy to SDN complete: when another Wazer updates the map or puts a roadblock on it, my application is instantly updated and my route changes.

Funny how explaining things to kids clears things up. My explanation did not include overlay networks by the way. I’ll leave that for another time.

An In-Depth View of 3 SDN Technologies

Pete Welcher did a great job describing, and even comparing, what he thinks will be the biggest SDN technologies. His premise is that these products

are likely to have much greater impact than SDN and other control products from smaller vendors.

Only time will tell, because in theory, by decoupling the software from the hardware, it becomes possible for pure software vendors to enter the networking market (and do a better job). That said, it is up to the customers of SDN to decide whether they want software bundled with a hardware deal or want to buy it separately.

I love that Pete is referring to something I say a lot when it comes to SDN:

I will say the word “LANE” (as in ATM LAN Emulation) at this point. The concept of “what device do we reach various MAC or IP addresses via” has come back again. This time, in IP tunnel form rather than ATM circuit form.

That’s it: we have been doing SDN for a long time, but apparently now is the time to market and hype it.
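The LANE comparison boils down to a lookup problem: through which tunnel endpoint do we reach a given MAC or IP address. Here is a toy Python sketch of that idea, purely illustrative and not tied to any real overlay implementation; the addresses and structure are invented.

    # Toy illustration of the LANE/overlay idea: a table that answers
    # "which tunnel endpoint do we reach this MAC via", now in IP tunnel form.
    mac_to_vtep = {
        "00:11:22:33:44:55": "10.0.0.1",   # VM A lives behind tunnel endpoint 10.0.0.1
        "66:77:88:99:aa:bb": "10.0.0.2",   # VM B lives behind tunnel endpoint 10.0.0.2
    }

    def encapsulate(destination_mac, payload):
        # Find the tunnel endpoint for the destination and wrap the frame in an
        # IP tunnel header (represented here as a simple dict).
        vtep = mac_to_vtep.get(destination_mac)
        if vtep is None:
            raise LookupError(f"no tunnel endpoint known for {destination_mac}")
        return {"outer_dst_ip": vtep, "inner_frame": payload}

    if __name__ == "__main__":
        print(encapsulate("66:77:88:99:aa:bb", b"hello from VM A"))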

Please read Pete’s great post and all the rest in his series, but also don’t forget to look at some of the other technologies, like Juniper’s Contrail, that do not come from his “big” vendors.

5 Network Management Resolutions for 2014

Everybody can write down some resolutions or predictions for the upcoming year, but depending on the subject you need more expertise to make trustworthy predictions. I think the EMA has shown expertise in the field of networking and management. Here is my rewrite of their predictions, with my two cents added:

  1. APIs are going to be hot. Programmability of the network is getting traction, and in order to make good use of it you need to engage with APIs. There is, however, a whole slew of legacy networks that have the CLI as their only “API”.
  2. Leverage automation wherever possible. Enforce change-management procedures, preferably by reducing the use of the CLI. Backups of configurations are a good thing, but they are second best, always lagging behind changes and problems. Pro-active change management uses a top-down approach; a minimal sketch of such an automated workflow follows this list.
  3. Tear down the walls. Not only between netops and sysops, but also between development and operations. Only when all of these teams collaborate will the performance and stability of the network improve. The clear division between networks and systems is blurring, as we will see in the next point.
  4. Virtual networking is hot. If the networks are not agile enough for systems and applications, then the latter will get around the network teams by building overlays on top of the legacy networks. If this does not go hand in hand with the previous point, I foresee some serious troubleshooting issues.
  5. Performance and monitoring. This has always been a top priority, but with a mindset of programmability and automation of the network it pays off even more. This is a good time to leverage the data that is available in the network.
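As a small illustration of points 1 and 2, here is a minimal Python sketch that backs up device configurations through a hypothetical REST API instead of scraping the CLI. The device names, endpoint and response format are assumptions for illustration; real devices and management systems will differ.

    # Minimal sketch: automated config backup via a hypothetical device REST API,
    # instead of hand-driven CLI sessions. Hostnames and endpoints are invented.
    import datetime
    import pathlib

    import requests   # pip install requests

    DEVICES = ["switch01.example.net", "switch02.example.net"]   # hypothetical inventory
    BACKUP_DIR = pathlib.Path("config-backups")

    def backup_config(device):
        # Assumed API: GET /api/config returns the running configuration as text.
        resp = requests.get(f"https://{device}/api/config",
                            auth=("backup", "secret"), verify=False, timeout=10)
        resp.raise_for_status()
        stamp = datetime.date.today().isoformat()
        target = BACKUP_DIR / f"{device}-{stamp}.cfg"
        target.write_text(resp.text)
        return target

    if __name__ == "__main__":
        BACKUP_DIR.mkdir(exist_ok=True)
        for device in DEVICES:
            print("saved", backup_config(device))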

No word about SDN. But as you can see, these points combine very well with SDN. In fact, I think it is even better to describe the goals for the upcoming year in the terms above. SDN is just an enabler and must never be a goal in itself.

Apple iOS Backup and Restore

These holidays a lot of Apple hardware changed owners among family and friends. As always it is my job to organize all the backups and restores. With proper planning everything went well, but I also learned about some small differences I want to share with you.

First of all, most of the backups (and restores) went through iCloud. This works great and caused no problems. Every device from iOS 5 up can use iCloud as a backup and restore mechanism. I had one iPhone 4 that was still running iOS 4, so that one I had to do via the iTunes backup/restore procedure. Effectively this gives the same result as iCloud.

Backup and restore with exactly the same device and iOS version is the easiest. You will restore all your apps, app settings and iOS settings, including passwords.

A restore can only be done to the same iOS version or newer, but to any type of device. So you can restore an iPad backup to an iPhone and vice versa. When you apply such a restore you get all your apps (when supported by the device) and app settings back. You will, however, lose your iOS settings, including mail settings, passwords, backgrounds and folders. But still, most of the information is intact.

When you restore to the same device type, but a different device and/or a newer iOS, you get everything back except passwords.

These procedures work great, and the things you can’t restore are in my view expected and acceptable. Apple did a great thing here and I love the iCloud option they introduced in iOS 5.

Software Defined 2014

Let me start by wishing everybody a happy and successful 2014!

With all the reviews, predictions and resolutions already on the web, I am not going to attempt anything like that. If you thought 2013 was the year of Software Defined, 2014 will be the accelerator.

Just like cloud, Software Defined is hype. But it is hype that will change the way we operate networks over the next 18 months. Software Defined is not a thing, not a role, not even a technology. Software Defined is a movement, just like DevOps. In some ways the two are related: they are both changing the way we operate and maintain networks, and they are both tearing down the walls between dev and ops and between networks and systems.

Applications and services will be at the heart of what we deliver. All the rest is commodity. The end-user is going to define what, how and when he needs them, and the network and systems have to deliver them, as agile as possible. This is where both Software Defined and DevOps come into play.

This year we will see some massive proofs of concept of SDN and NFV. We will even see some projects going into production. Was 2013 the year in which we had a hammer and were looking for the nail? This year we will find the nails and we will see which hammers are successful.

We will have a very exciting year ahead.

Microsoft Releases Lync API for SDN

I was pretty excited yesterday when the news came out that Microsoft is releasing a Lync API for SDN, which is the result of a successful proof of concept earlier this year. Not that I am a big Microsoft fan, but the concept can be translated to other solutions.

HP and Microsoft demo

The concept is very simple. The SDN controller has full visibility of the network and the Lync server has full knowledge of the communication devices. When Lync clients set up a connection and try to set CoS/QoS tags, the Lync server can authorize these markings. Other markings of this kind are removed when packets enter the network. Another possibility is real-time traffic engineering for these flows.
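To make the idea concrete, here is a toy Python sketch of the authorization flow: only flows the call server vouches for keep their QoS marking, everything else is re-marked at the edge. The classes and values are invented for illustration and have nothing to do with the actual Lync SDN API.

    # Toy sketch of the marking-authorization idea: the edge only keeps a QoS
    # marking if the call server has authorized that flow; otherwise the
    # marking is stripped. All names and values are invented.
    BEST_EFFORT = 0
    VOICE_DSCP = 46   # "expedited forwarding", commonly used for voice

    class CallServer:
        """Stands in for the Lync server: it knows which flows are real calls."""
        def __init__(self):
            self.authorized_flows = set()

        def authorize(self, src, dst):
            self.authorized_flows.add((src, dst))

        def is_authorized(self, src, dst):
            return (src, dst) in self.authorized_flows

    class EdgeSwitch:
        """Stands in for the SDN-controlled edge: it enforces the markings."""
        def __init__(self, call_server):
            self.call_server = call_server

        def classify(self, src, dst, requested_dscp):
            if self.call_server.is_authorized(src, dst):
                return requested_dscp      # keep the client's QoS marking
            return BEST_EFFORT             # strip unauthorized markings

    if __name__ == "__main__":
        lync = CallServer()
        lync.authorize("10.1.1.10", "10.2.2.20")                     # a real call
        edge = EdgeSwitch(lync)
        print(edge.classify("10.1.1.10", "10.2.2.20", VOICE_DSCP))   # 46, kept
        print(edge.classify("10.9.9.99", "10.2.2.20", VOICE_DSCP))   # 0, stripped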

Translating this concept to other technologies is interesting. Of course there are the well-known real-time protocols, like VoIP and video services. But this can also apply to other kinds of services, like storage, backup and applications. Flows through the network can be authenticated and dynamically optimized. You can even go a step further and enable this kind of thing for OTT providers such as Netflix. Of course the latter brings another challenge, which is that the SP network must be SDN-driven and must trust third parties to dynamically configure their network.

As a reader of my blog you already know that I am an SDN skeptic. Not that I don’t believe in the solution itself; I just doubt the maturity of the protocols and vendors, and how much risk customers are willing to take to invest in SDN. It is this kind of business-enabling proof of concept that will convince customers of the value of SDN, which in turn enables investments in SDN.

2014 will be the year of very serious business-driven proofs of concept, which will accelerate the transition of networks from hardware-driven to software-defined.