Archive for the ‘research’ Category

I am ashamed! I am ashamed of not having held to my promise of continuously blogging on the topics I am interested in and believe are worth sharing with Internet fellows. It is not that I have been unemployed or on sabbatical; on the contrary, many achievements and good things have happened since the last post on the QuagFlow developments (dated July 7, 2010).

Too lazy to blog about them? To be honest, that may be part of the reason… Priorities change over time, in both personal and professional affairs. Fortunately, the spirit of openness and intellectual sharing is very much alive, only not through this blog (or my underutilized Twitter accounts @chesteve @futnetcpqd, more reasons for shame), a situation I intend to turn around. I have at least three unfinished post drafts that should have seen the light… Better late than never! In this welcome-back post I will report on some highlights of the past 12 months. In upcoming posts I will finish and release the existing drafts and provide regular updates on my current activities (if lucky, some of them even with some technical flavor) as a Research Scientist at CPqD and happy human being settled in Campinas, Brazil.

What has happened since 07/07/2010?

This is a very incomplete and subjective (work-oriented) list of good things that have happened since the last blog post. Apologies for the items I forgot to list! It is worth noting, however, that the last activity on the blog dates to late September 2010, via the comments. The discussion initiated by Carlos ‘Bill’ Nilton has turned into a true cooperation between Unirio and CPqD, and Bill has been of incredible value to our project, yielding very useful code contributions in the best spirit of open source, materialized in the form of recent joint publications! I can fairly say that without this blog I would not have had the chance to work together with Bill, a reason alone that should fuel my energy for blogging in 2011 and beyond!
Carlos Macapuna and me, on the day of paragliding in Andradas. Looking forward to continuing to fly high!



Read Full Post »

Over the last 12 months I have been very lucky to be able to go on a top-conference tour:
CoNEXT in Madrid, INFOCOM in Rio, SIGCOMM in Barcelona and two Future Internet summer schools: the 4WARD FISS in Bremen and the Trilogy FISS in Louvain-la-Neuve. Thanks to all the organizers!!

SIGCOMM banner in Barcelona.

I had the chance to meet very nice people from around the world, have great technical discussions, and of course have a great time during the social events and tourist activities. Sharing experiences with other PhD students has been very fruitful! Being able to hear in person the visions of tier-1 international researchers and getting first-hand feedback on your own work… priceless!


Poster on enabling an information-centric forwarding plane. Discussion with Van Jacobson on CCN.

Poster session at the Trilogy summer school.

– More photos from the Trilogy summer school
– And from SIGCOMM09

Now, the event season continues in Brazil. This week we are organizing our First International Workshop on New Architectures for Future Internet. The event will be streamed live and the material from the talks will be posted online. We plan to write and publish a summary report gathering the essence of the discussions and our conclusions for future Internet research activities. Next week, we will have the opportunity to report on Future Internet research at the XXVII Simpósio Brasileiro de Telecomunicações (SBrT 2009).

Joining the trend of using Web 2.0 technologies to make conferences more attractive and useful (with SIGCOMM09 being a remarkable example), we have set up the following information channels. Please feel free to join them. We will use Twitter to let remote participants take part in the Q&A sessions and the discussion panels:

– Live streaming of the event
– Twitter futnetcpqd channel for live conversations and remote questions to the speakers
– Linkedin: Join the Future Internet CPqD Open Research Group
– Flickr page with pictures from the workshop

Next time I will post some thoughts on the upcoming events.

Read Full Post »

Continuing the enterprise of research into re-architecting the future Internet, I have started to delve into the world of “network coding”, a recent field of study (Ahlswede et al., 2000) that aims at solving the “information flow problem” by equipping forwarding nodes in a network with “content mixing” capabilities over data flows (packets), in addition to plain forwarding operations.

Even though the practical usefulness of network coding has yet to be proven in many real networking scenarios, it is being considered by major industry research players as part of the next wave of networking.

The promise of network coding? Gains in terms of network throughput, resilience, security, simplicity… an alternative to the current practice of boosting network performance, which basically relies on new networking hardware generations with increased chip rates and memory sizes.

In my opinion, network coding applied to future inter-networking architectures is an example of research by questioning paradigms and has the potential to introduce another shift in internetworking with an impact comparable to the information theory work of Shannon 60 years ago.

In this post, I will not introduce gratuitous maths or a non-rigorous explanation of network coding (please refer to the vast literature, especially a book for theoreticians and another one for practitioners). The point I want to make is a set of key observations that make me believe network coding is an area worth exploring for any future networking research project:

  • Big computer industry players (e.g., Microsoft, HP, Intel) are investing in network-coding applied research. There may be something ($$$) beyond pure academic research.
  • Pioneering research institutions around the world (e.g., Berkeley, MIT) are increasingly publishing the practical results of ongoing research projects.
  • Network coding is meeting legacy network settings, e.g., the TCP protocol (Sundararajan et al., INFOCOM ’09). We may see further “transparent” integration of network coding in real systems.

So far, so good. But network coding is a tricky area. Even though its basic concept and the canonical example over a butterfly-type network are pretty simple, the fields where network coding can be applied and the implementation options are very broad, spanning all the layers of the traditional network stack:

I admit that a taxonomy based on layers becomes blurry below the network level, where “practical network coding” by Chou et al. introduced the notion of mixing packets within generations.
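For readers new to the area, the canonical butterfly example can be sketched in a few lines of Python: a source multicasts two packets to two sinks, the shared bottleneck link carries their XOR, and each sink recovers the packet it did not receive directly.

```python
# Canonical butterfly example: source S sends packets `a` and `b` towards
# sinks T1 and T2 over a network whose middle link is a bottleneck. Plain
# routing can forward only one of them over that link; coding forwards
# a XOR b instead, and each sink recovers the missing packet by XORing
# the coded packet with the one it already received directly.

def xor(p1: bytes, p2: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(p1, p2))

a = b"packet-A"
b = b"packet-B"

coded = xor(a, b)          # the single packet sent over the bottleneck link

recovered_b = xor(coded, a)  # sink T1 holds `a` and receives `coded`
recovered_a = xor(coded, b)  # sink T2 holds `b` and receives `coded`

assert recovered_a == a and recovered_b == b
```

Both sinks thus receive both packets in one use of the bottleneck link, which plain forwarding cannot achieve.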

There is also skepticism (“How Practical is Network Coding?”). Or should I say good sense (“Mixing Packets: Pros and Cons of Network Coding”). We may witness a phase of disillusionment around network coding, if a typical Gartner hype cycle applies to this field of research.

Gartner's hype cycle

A question to the community: in which phase would you say network coding is (if research were a technology product)?

  1. “Technology Trigger”
  2. “Peak of Inflated Expectations”
  3. “Trough of Disillusionment”

I have my own list of questions when thinking about the practicality (implementability) of network coding:

  • How to decide which information to mix when operating over multiple flows?
  • How to carry the coding operations efficiently to the final data consumers?
  • How to achieve butterfly-type network paths in the information-oriented network under consideration?
  • What are the differences and similarities of network coding over wireless networks compared to wired deployments?
  • Feedback channel and applicability in two-way communications (real and non-real time)?
  • Security advantages and implications of network coding?
  • Interactions with active caching functionalities at multiple levels (packets, pages, documents)?

I will start thinking small, listing the requirements (e.g., multi-paths, identifier space, meaningfulness of packet generations, packet headers) to develop a strawman approach to network coding in the context of information-oriented networking (e.g., the PSIRP project). Then, we can evaluate the costs and the practical benefits through extensive ns-3 simulations and maybe some NetFPGA test implementations.
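As a toy illustration of the generation-based approach mentioned above, here is a hypothetical sketch of random linear network coding over GF(2): an encoder emits random XOR combinations of a generation's packets together with their coefficient vectors, and a decoder recovers the sources by Gaussian elimination once it holds a full-rank set. Practical systems usually operate over GF(2^8) and carry generation IDs in packet headers; everything below is simplified for illustration.

```python
import random

def encode(generation, rng):
    """Emit one coded packet: a random GF(2) (XOR) combination of the
    generation's source packets, with its coefficient vector."""
    coeffs = [rng.randint(0, 1) for _ in generation]
    if not any(coeffs):
        coeffs[rng.randrange(len(coeffs))] = 1  # avoid the useless all-zero vector
    payload = [0] * len(generation[0])
    for c, pkt in zip(coeffs, generation):
        if c:
            payload = [x ^ y for x, y in zip(payload, pkt)]
    return coeffs, payload

def decode(coded, n):
    """Gaussian elimination over GF(2); returns the n source packets,
    or None while the received coefficient vectors are not full rank."""
    basis = {}  # pivot column -> (coeffs, payload)
    for coeffs, payload in coded:
        coeffs, payload = list(coeffs), list(payload)
        for col in range(n):
            if coeffs[col]:
                if col in basis:          # eliminate against existing pivot
                    bc, bp = basis[col]
                    coeffs = [a ^ b for a, b in zip(coeffs, bc)]
                    payload = [a ^ b for a, b in zip(payload, bp)]
                else:                     # new pivot: keep this row
                    basis[col] = (coeffs, payload)
                    break
    if len(basis) < n:
        return None
    for col in sorted(basis, reverse=True):  # back-substitution
        coeffs, payload = basis[col]
        for col2 in range(col + 1, n):
            if coeffs[col2]:
                bc, bp = basis[col2]
                coeffs = [a ^ b for a, b in zip(coeffs, bc)]
                payload = [a ^ b for a, b in zip(payload, bp)]
        basis[col] = (coeffs, payload)
    return [basis[i][1] for i in range(n)]

# A relay keeps emitting coded packets until the sink can decode.
rng = random.Random(42)
generation = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
received, result = [], None
while result is None:
    received.append(encode(generation, rng))  # some packets may be redundant
    result = decode(received, len(generation))
assert result == generation
```

Note that no coordination is needed between encoders: any sufficiently large set of independently coded packets suffices to decode, which is what makes the approach attractive over lossy or multi-path networks.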

Whether network coders will eventually supplant routers in large, shared infrastructures like the Internet is very questionable; maybe in the long term as an additional network service… However, I think we will see more and more real-life (niche?) solutions implementing some flavour of network coding, be it an IPTV multicast deployment, Instant Messaging dissemination protocols, error correction algorithms, switch designs or new variants of P2P content distribution schemes like Microsoft’s Avalanche…

I am curious whether Rudolf Ahlswede of the University of Bielefeld, Germany, could have imagined the impact of his research back in 2000. In this post, I have raised more questions than answers. Hopefully, I can turn that around during this promising 2009.

To end, an optimistic quote from network coding experts:

“By changing how networks function, network coding may influence society in ways we cannot yet imagine.”



P.S.: I found another optimistic (press-type) reference related to the information-oriented research area: PCMag includes Van Jacobson’s content-centric networking (CCN) as one of the “five ideas that will reinvent modern computing”, although I very much dislike the term “Extreme Peer-to-Peer” used by the PCMag writers.

P.S.2: I could not resist googling what the blogosphere has commented on this topic:

Back to Research: Network Coding and a Small Riddle for You
Network Coding for Mobile Phones
– Do you know more?
Selected publications on network coding:

  • R. Ahlswede, N. Cai, S.-Y. R. Li and R. W. Yeung. Network Information Flow. IEEE Transactions on Information Theory, Vol. 46, No. 4, pages 1204-1216; July 2000.
  • T. Ho and D. S. Lun. Network Coding: An Introduction. Cambridge University Press, Cambridge, U.K., April 2008.
  • C. Fragouli and E. Soljanin. Network Coding Applications. Foundations and Trends in Networking, Vol. 2, No. 2, pages 135-269; January 2007. DOI: http://dx.doi.org/10.1561/1300000013
  • R. W. Yeung. Information Theory and Network Coding. The Chinese University of Hong Kong, Springer, August 2008.
  • S.-Y. R. Li, R. W. Yeung and N. Cai. Linear Network Coding. IEEE Transactions on Information Theory, Vol. 49, No. 2, pages 371-381; February 2003.
  • R. Koetter and M. Médard. An Algebraic Approach to Network Coding. IEEE/ACM Transactions on Networking, Vol. 11, No. 5, pages 782-795; October 2003.
  • S. Jaggi, P. Sanders, P. A. Chou, M. Effros, S. Egner, K. Jain and L. M. G. M. Tolhuizen. Polynomial Time Algorithms for Multicast Network Code Construction. IEEE Transactions on Information Theory, Vol. 51, No. 6, pages 1973-1982; June 2005.
  • T. Ho, M. Médard, R. Koetter, D. R. Karger, M. Effros, J. Shi and B. Leong. A Random Linear Network Coding Approach to Multicast. IEEE Transactions on Information Theory, Vol. 52, No. 10, pages 4413-4430; October 2006.
  • J. K. Sundararajan, D. Shah, M. Médard, M. Mitzenmacher and J. Barros. Network Coding Meets TCP. CoRR, abs/0809.5022; 2008.
  • P. A. Chou, Y. Wu and K. Jain. Practical Network Coding. 2003. Available: http://citeseer.ist.psu.edu/chou03practical.html
  • J. Barros. Mixing Packets: Pros and Cons of Network Coding. Proc. Wireless Personal Multimedia Communications Symposium (WPMC), Lapland, Finland; September 2008.


Read Full Post »

Recent projects have brought me to dive into the field of the Semantic Web, ontologies and their application to new networking paradigms, in both telco-driven Next Generation Network (NGN) architectures and Next Generation Internet (NGI) architectural proposals. Note that the term “next generation” has become a buzzword that often leads to confusion, since NGNs and NGIs have very different foci when looking at future networks.

Telecom world (NGN): In order to turn the promise of fast time to market for fancy multimedia blended (presence+IPTV+FMC+IM+voice+Push-to-X+…) services into reality, IMS alone is not enough. The IMS-based service layer of NGNs needs a Service Broker strategy in the Service Delivery Platform (SDP) to orchestrate the operations involved in service activation, provisioning and, finally, service execution. SDPs are therefore moving towards service-oriented architectures (SOA) and ultimately towards so-called Service Oriented Infrastructures (SOI): a middleware infrastructure that natively supports XML processing and all configurable infrastructure resources (compute, storage, and networking hardware and software to support the running of applications). Service Oriented Network Architectures (SONA) is another term commonly used for these new event-driven, service-centric architectures; however, I dislike it due to the existence of a commercial solution by Cisco using this name (nothing at all against Cisco, I just prefer to stick with generic terminology).

Time to market for new services over SOA-based SDPs is still too long when you must consider the integration of the legacy and heterogeneous OSS/BSS systems of a big telco. An incumbent fixed and mobile operator present in several countries faces the challenge of heterogeneous service delivery frameworks, and trying to deliver the “same service” in the different localities becomes a nightmare due to system heterogeneity and the need for local customization. No surprise, then, that major telcos are going the way of outsourcing network operations and opening their service platforms to third parties. We may see up to 100% outsourcing with new business models in the near future, with traditional telcos acting as service supermarkets: putting up the infrastructure, branding, auditing the services, billing (always!), and leaving service providers the tedious task of managing the SDP and even pushing their products to the front (I hope the analogy is clear; in a future post I will try to explore this issue further).

The main point is that the way telco R&D sees to overcome these heterogeneity issues is not just SOA (XML, Web Services): it should also include the notion of semantics and the concepts of ontologies (DAML, OWL, RDF). The most relevant effort is the EU SPICE project, and a good paper to understand this approach is “SPICE: Evolving IMS to Next Generation Service Platforms”. XML provides standardized data, but only by enriching this data with semantic connectors are you in a position to deal with the aggregation, adaptation, composition, integration and personalization of the heterogeneous systems involved in the service environment. Equipment vendors are already looking at how to incorporate the required semantics and ontologies to implement Service Orchestration in their SDP solutions; the SCIM in the IMS architecture is a good starting point, but service provisioning and activation in OSS systems are also major elements that could benefit from these semantic enhancements.
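To make the idea of semantic connectors concrete, here is a minimal, hypothetical sketch: service capabilities are stored as subject-predicate-object triples, and a one-level subClassOf relation lets a request for a broad concept match services that only advertise a narrower one. All service names and predicates below are invented; a real SDP would use RDF/OWL with a proper triple store and reasoner.

```python
# Hypothetical toy knowledge base of service capabilities as
# (subject, predicate, object) triples. The names are invented
# purely for illustration of semantic matching beyond raw XML.
triples = {
    ("PresenceService", "provides", "presence"),
    ("IPTVService", "provides", "video"),
    ("PresenceService", "requires", "sip-session"),
    ("IPTVService", "requires", "multicast"),
    ("video", "subClassOf", "media"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

def providers_of(concept):
    """Find services providing a concept, following subClassOf one level:
    a request for "media" also matches services that provide "video"."""
    narrower = {concept} | {s for s, _, o in query(p="subClassOf", o=concept)}
    return {s for c in narrower for s, _, o in query(p="provides", o=c)}
```

With plain XML matching, a request for "media" would find nothing; with the subClassOf axiom, `providers_of("media")` resolves to the IPTV service. This is the kind of inference that makes composition across heterogeneous systems tractable.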

Knowledge plane in the SPICE project.
Interestingly, the research coming from the clean-slate wave of future Internet projects (NGI) has also pointed out the requirement for semantics in the new Internet, leading to the notion of “A Knowledge Plane for the Internet”. I observed this tendency only recently in a post by Dirk Trossen on the architectural vision called tussle networking (slides here). The EU PSIRP project proposes a new information-centric internetworking architecture and foresees a kind of knowledge plane, also based on Semantic Web techniques, to govern the new networking patterns.

D. Trossen's slides

This knowledge plane aims at having a higher-level view of the network: an information-aware system that uses artificial intelligence and cognitive techniques to solve the conflicts (tussles) in the network between the different actors (users, network resources, businesses, organizations). It relies on end-system involvement and is a distributed plane that needs to correlate information from different points in the network. It is supposed to recursively, dynamically and autonomously compose and decompose with the scale of the network. Yes, I also think it may sound very abstract, futuristic and ambitious, but I am starting to see that this network behavior might be implementable with enhanced techniques from the Semantic Web. Furthermore, I believe it is the responsibility of network research to think outside the box and try new paradigms, thereby overcoming our frequent inability to see beyond what is already there (a topic worth debating in a future post).

My point is: once you enrich SOA-based interfaces and database interactions with an ontology-enabled semantic scheme, you are in a position to construct this “knowledge plane”, whether for your SDP or for the global Internet, attending in each case to the different requirements in terms of performance and scalability. With the gathered mesh of information, the knowledge plane is then able to perform the reasoning processes needed to tackle the particular networking tussles.
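The "gather observations, apply rules" pattern can be sketched minimally, with invented probe data: several vantage points report loss measurements, and a simple rule blames a target only when independent witnesses agree. A real knowledge plane would of course be distributed and far richer than this.

```python
# Hypothetical sketch: a knowledge plane correlates observations from
# several vantage points and applies a simple rule to produce a diagnosis.
# Probe names, targets and loss figures are invented for illustration.
observations = [
    {"probe": "host-A", "target": "10.0.0.5", "loss": 0.40},
    {"probe": "host-B", "target": "10.0.0.5", "loss": 0.38},
    {"probe": "host-A", "target": "10.0.0.9", "loss": 0.01},
]

def diagnose(obs, loss_threshold=0.25, min_witnesses=2):
    """Rule: if several independent probes see high loss towards the same
    target, attribute the problem to the target side, not to the probes."""
    witnesses = {}
    for o in obs:
        if o["loss"] > loss_threshold:
            witnesses.setdefault(o["target"], set()).add(o["probe"])
    return {t for t, w in witnesses.items() if len(w) >= min_witnesses}
```

Here `diagnose(observations)` singles out 10.0.0.5: two independent witnesses report high loss towards it, while the low-loss path to 10.0.0.9 exonerates the probes themselves. The point is that the conclusion only emerges once observations from different points in the network are correlated.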

A practical example comes from the most innovative areas in Internet research of the last decade: CDNs and P2P systems. In my opinion, CDNs are already acting somewhat like knowledge agents, gathering information about network performance and user location with the goal of redirecting user requests to a concrete (the most convenient) surrogate server. A smart use of this CDN knowledge is a recent P2P project that benefits from the “user vicinity information” granted by the CDN to build the peer links, thus enhancing the quality and performance of the overall P2P network.
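The peer-selection idea can be sketched as follows, assuming the tracker has access to a CDN-derived map of which surrogate serves each peer; all names and data are invented for illustration.

```python
# Hypothetical vicinity map: peer -> closest CDN surrogate. In the real
# system this knowledge comes from observing which surrogate the CDN
# redirects each user to; here it is a static dict for illustration.
vicinity = {
    "p1": "sp-edge", "p2": "sp-edge", "p3": "rio-edge",
    "p4": "rio-edge", "p5": "sp-edge",
}

def select_peers(me, candidates, k=2):
    """Prefer peers behind the same surrogate as `me` (likely nearby in
    the network), then fill the remaining slots with any other peers."""
    local = [p for p in candidates if vicinity.get(p) == vicinity.get(me)]
    remote = [p for p in candidates if p not in local]
    return (local + remote)[:k]
```

For example, `select_peers("p1", ["p2", "p3", "p4", "p5"])` picks p2 and p5, the candidates behind the same edge as p1, so most P2P traffic stays local instead of crossing inter-domain links.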



P.S.: Note the similarities around knowledge in the cited works:

NGI perspective: A Knowledge Plane for the Internet:

A network with a knowledge plane, a new higher-level artifact that addresses issues of “knowing what is going on” in the network.

At an abstract level, this is a system for gathering observations, constraints and assertions, and applying rules to these to generate observations and responses.

At the physical level, this is a system built out of parts that run on hosts and servers within the network. It is a loosely coupled distributed system of global scope.

NGN perspective: SPICE Knowledge Services:

Knowledge services provide access to various knowledge gathered from the web, network operators, user profiles and information about local resources.

Local resources are typically discovered by the user terminal and may include nearby devices, accessible networks, local services and sensor data gathered from wireless sensor networks.

The generic form of a knowledge service is called a knowledge source; it provides an interface for querying and subscribing to knowledge, and registers with a knowledge broker.

A knowledge broker is used to find knowledge sources that are able to answer a specific query. Specialized knowledge services include reasoners and recommenders that derive knowledge for personalized end-user services and offer more specific interfaces.

The knowledge layer is in itself a service oriented architecture whose components are used by value added services.

Read Full Post »