
In today's digital-intense world, one could argue that there is a decreasing tendency to consume content and share information in the form of traditional paper books. At the same time, my experience consuming books in audio form (audible.com) and as eBook PDFs (I have not yet tried the Amazon Kindle or the Sony e-reader) has been very positive! I especially like the audio format for my commutes, the supermarket, administrative queues and outdoor jogging. I have heard great reviews about the Kindle 2; I hope it will soon be available outside the US, and if not, I may consider buying it anyway if it is usable without the US wireless infrastructure…

In this post, I just want to share some of the latest books (from multiple disciplines) I have “consumed” and can really recommend:

– Outliers: The Story of Success by Malcolm Gladwell

An ”outlier” is a super-achiever, like Bill Gates, the Beatles and many others. The author unveils the hidden factors that make people extremely successful, with compelling arguments that contradict the reader’s intuition many times along a nice storyline. You will find out how important it can be where and when you were born (e.g., premier-league athletes are mostly born in the first quarter of the year; IT billionaires like Jobs or Gates were born around 1955), or why Asians are so good at math (spoiler: it is not in the genes). This book reminds me of another excellent common-belief breaker:


– Brain Rules: 12 Principles for Surviving and Thriving at Work, Home, and School by John J. Medina, Ph.D.

Network Coding Applications by Christina Fragouli and Emina Soljanin 

Continuing the enterprise of research into re-architecting the future Internet, I have started to delve into the world of “network coding”, a recent field of study (Ahlswede, 2000) that aims at solving an “information flow problem” by equipping forwarding nodes in a network with “content mixing” capabilities over data flows (packets), in addition to plain forwarding operations.

Even though the practical usage of network coding has yet to be proven in many real networking scenarios, network coding is being considered by major industry research players as part of the next wave of networking.

The promise of network coding? Gains in terms of network throughput, resilience, security, simplicity… an alternative to the current practice of boosting network performance, which basically relies on new networking hardware generations with increased chip rates and memory sizes.

In my opinion, network coding applied to future inter-networking architectures is an example of research by questioning paradigms and has the potential to introduce another shift in internetworking with an impact comparable to the information theory work of Shannon 60 years ago.

In this post, I will not introduce gratuitous maths or non-rigorous explanations of network coding (please refer to the vast literature, especially a book for theoreticians and another one for practitioners). The point I want to make is a set of key observations that make me believe network coding is an area worth exploring for any future networking research project:

  • Big computer industry players (e.g., Microsoft, HP, Intel) are investing in applied network-coding research. There may be something ($$$) beyond pure academic research.
  • Pioneering research institutions around the world (e.g., Berkeley, MIT) are increasingly publishing the practical results of ongoing research projects.
  • Network coding is meeting legacy network settings, e.g., the TCP protocol (Sundararajan et al., INFOCOM ’09). We may see further “transparent” integration of network coding in real systems.

So far, so good. But network coding is a tricky area. Even though its basic concept and the canonical example over a butterfly type of network are pretty simple, the actual fields where network coding can be applied and the implementation options are very broad, spanning all the layers of the traditional network stack.
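For readers new to the field, here is the butterfly intuition in a few lines of Python (a toy sketch, with node roles of my choosing, not anyone's reference implementation): the bottleneck link carries the XOR of both source bits, and each sink recovers the bit it did not hear directly.

def coding_node(a: int, b: int) -> int:
    """The bottleneck node mixes the two incoming bits instead of
    having to choose which one to forward."""
    return a ^ b

def sink(direct_bit: int, coded_bit: int) -> int:
    """Each sink hears one source bit directly plus the mix;
    one XOR recovers the bit it is missing."""
    return direct_bit ^ coded_bit

a, b = 1, 0
mixed = coding_node(a, b)
assert sink(a, mixed) == b  # sink 1 hears a directly and recovers b
assert sink(b, mixed) == a  # sink 2 hears b directly and recovers a

With plain forwarding, the bottleneck link could serve only one sink per transmission; mixing serves both at once, which is exactly the multicast throughput gain the theory promises.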

I admit that a taxonomy based on layers gets blurry below the network level, where “practical network coding” by Chou et al. introduced the notion of mixing packets within generations.
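To illustrate the generation idea, here is a toy sketch of random linear network coding over GF(2) (the field choice, sizes and all names are mine; practical systems typically work over GF(2^8)): the sender emits random combinations of a generation's packets with the coding vector carried in the packet header, and the receiver decodes by Gaussian elimination once enough independent combinations have arrived.

import random

GEN_SIZE = 4  # packets per generation (toy value)
PKT_LEN = 8   # payload bytes per packet

def encode(generation: list[bytes]) -> tuple[list[int], bytes]:
    """Emit one coded packet: a random GF(2) coding vector (carried in
    the header) plus the XOR of the selected payloads."""
    coeffs = [random.randint(0, 1) for _ in generation]
    payload = bytes(PKT_LEN)
    for c, pkt in zip(coeffs, generation):
        if c:
            payload = bytes(x ^ y for x, y in zip(payload, pkt))
    return coeffs, payload

def decode(coded: list[tuple[list[int], bytes]]) -> list[bytes] | None:
    """Gauss-Jordan elimination over GF(2); recovers the generation once
    GEN_SIZE linearly independent coded packets have been received."""
    rows = [(list(vec), bytearray(pay)) for vec, pay in coded]
    pivot_row = {}  # pivot column -> row index
    r = 0
    for col in range(GEN_SIZE):
        cand = next((i for i in range(r, len(rows)) if rows[i][0][col]), None)
        if cand is None:
            continue  # no pivot found for this column yet
        rows[r], rows[cand] = rows[cand], rows[r]
        pv, pp = rows[r]
        for i, (v, p) in enumerate(rows):
            if i != r and v[col]:  # clear this column from every other row
                rows[i] = ([a ^ b for a, b in zip(v, pv)],
                           bytearray(x ^ y for x, y in zip(p, pp)))
        pivot_row[col] = r
        r += 1
    if len(pivot_row) < GEN_SIZE:
        return None  # rank deficient: wait for more innovative packets
    return [bytes(rows[pivot_row[col]][1]) for col in range(GEN_SIZE)]

generation = [bytes([i]) * PKT_LEN for i in range(1, GEN_SIZE + 1)]
received = [encode(generation) for _ in range(3 * GEN_SIZE)]  # oversampled
assert decode(received) == generation  # succeeds with high probability

Note the price paid: per-packet header overhead for the coding vector and decoding only at generation granularity, which is why the choice of generation size matters so much in practice.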

Skepticism is also out there (How Practical is Network Coding?). Or should I say good sense (“Mixing Packets: Pros and Cons of Network Coding”)? We may witness some phase of disillusionment around network coding, if a typical Gartner hype cycle can be applied to this field of research.

Gartner's hype cycle

Question to the community: in which phase would you say network coding is (if research were a technology product)?

  1. “Technology Trigger”
  2. “Peak of Inflated Expectations”
  3. “Trough of Disillusionment”

I have my own list of questions when thinking about the practicality (implementability) of network coding:

  • How to decide which information to mix when operating over multiple flows?
  • How to carry the coding operations efficiently to the final data consumers?
  • How to achieve butterfly-type network paths in the information-oriented network under consideration?
  • Differences and similarities of network coding over wireless networks compared to wired deployments?
  • Feedback channel and applicability in two-way communications (real and non-real time)?
  • Security advantages and implications of network coding?
  • Interactions with active caching functionalities at multiple levels (packets, pages, documents)?

I will start thinking small, listing the requirements (e.g., multi-paths, identifier space, meaningfulness of packet generations, packet headers) for a strawman approach to network coding in the context of information-oriented networking (e.g., the PSIRP project). Then, we can evaluate the costs and the practical benefits via extensive ns-3 simulations and maybe some NetFPGA test implementations.

Whether network coders will eventually supplant routers in large, shared infrastructures like the Internet is very questionable; maybe in the long term as an additional network service… However, I think we will see more and more real-life (niche?) solutions implementing some flavour of network coding, be it an IPTV multicast deployment, instant messaging dissemination protocols, error correction algorithms, switch designs or new variants of P2P content distribution schemes like Microsoft’s Avalanche…

I am curious whether Rudolf Ahlswede of the University of Bielefeld, Germany, could have imagined the impact of his research back in 2000. In this post, I have raised more questions than answers. Hopefully, I can turn this around during this promising 2009.

To end with, an optimistic quote from network coding experts:

“By changing how networks function, network coding may influence society in ways we cannot yet imagine.”

say EFFROS, KOETTER and MÉDARD.

-Ch.

P.D.: I found another optimistic (press-type) reference related to the information-oriented research area: PCMag includes Van Jacobson’s content-centric networking (CCN) as one of the “five ideas that will reinvent modern computing”, although I very much dislike the term “Extreme Peer-to-Peer” used by the PCMag editors.

P.D.2: I could not resist googling what the blogosphere has commented on this topic:

Back to Research: Network Coding and a Small Riddle for You
Network Coding for Mobile Phones
– Do you know more?
Selected publications on network coding:

  • R. Ahlswede, N. Cai, S.-Y. R. Li and R. W. Yeung. Network Information Flow. IEEE Transactions on Information Theory, Vol. 46, No. 4, pages 1204-1216; July 2000.
  • T. Ho and D. S. Lun. Network Coding: An Introduction. Cambridge University Press, Cambridge, U.K., April 2008.
  • C. Fragouli and E. Soljanin. Network Coding Applications. Foundations and Trends in Networking, Vol. 2, No. 2, pages 135-269; January 2007. DOI: http://dx.doi.org/10.1561/1300000013
  • R. W. Yeung. Information Theory and Network Coding. The Chinese University of Hong Kong, Springer, August 2008.
  • S.-Y. R. Li, R. W. Yeung and N. Cai. Linear Network Coding. IEEE Transactions on Information Theory, Vol. 49, No. 2, pages 371-381; February 2003.
  • R. Koetter and M. Médard. An Algebraic Approach to Network Coding. IEEE/ACM Transactions on Networking, Vol. 11, No. 5, pages 782-795; October 2003.
  • S. Jaggi, P. Sanders, P. A. Chou, M. Effros, S. Egner, K. Jain and L. M. G. M. Tolhuizen. Polynomial Time Algorithms for Multicast Network Code Construction. IEEE Transactions on Information Theory, Vol. 51, No. 6, pages 1973-1982; June 2005.
  • T. Ho, M. Médard, R. Koetter, D. R. Karger, M. Effros, J. Shi and B. Leong. A Random Linear Network Coding Approach to Multicast. IEEE Transactions on Information Theory, Vol. 52, No. 10, pages 4413-4430; October 2006.
  • J. K. Sundararajan, D. Shah, M. Médard, M. Mitzenmacher and J. Barros. Network Coding Meets TCP. CoRR, abs/0809.5022; 2008.
  • P. A. Chou, Y. Wu and K. Jain. Practical Network Coding. 2003. [Online]. Available: http://citeseer.ist.psu.edu/chou03practical.html
  • J. Barros. Mixing Packets: Pros and Cons of Network Coding. Proc. Wireless Personal Multimedia Communications Symp. (WPMC), Lapland, Finland; September 2008.


For a long time now, funding agencies around the world have been promoting research towards the so-called future Internet. Clean-slate design has been a buzz term for networking project proposals.

Today’s use of the Internet exposes well-known limitations in terms of mobility, security, address space exhaustion, routing and content delivery efficiency. Continuously patching the Internet with ad-hoc protocol extensions and overlay solutions (CDNs, P2P, DPIs, NAT-aware protocols, MIP) is a complex and costly approach for the long term.

Research to circumvent current Internet limitations can be divided into those advocating a completely new architecture (clean-slate), and those defending an evolutionary approach due to incremental deployability concerns. From a research perspective, clean-slate design does not presume clean-slate deployment and aims at innovation through questioning fundamentals [Slide 3 of PSIRP public presentation].

Think out of the TCP/IP box

A key question is to what extent a new paradigm thinking ‘out-of-the-TCP/IP-box’ for the future network is really necessary, e.g., as packet switching was to circuit switching in the 70’s. The reasoning is based on the large-scale use of the Internet for dissemination of data [JAC06]. Tons of connected devices are generating and consuming content, without caring about the actual data source as long as integrity and authenticity are assured [DONA].

Information-oriented / content-centric / data-oriented networking

The Internet has shifted from being a simple host connectivity infrastructure to a platform enabling massive content production and delivery, transforming the way information is generated and consumed. By its original design, the Internet carries datagrams inserted by sending hosts in a best-effort manner, agnostic to the semantics and purpose of the data transport. There is a sense that the network could do more and better, given that today’s use of the network is about retrieval of named pieces of data (e.g., a URL, a service, a user identity) rather than connections to specific destination hosts. TCP/IP is inherently unfair and inefficient for data dissemination purposes (e.g., multiple flows of P2P applications, redundant information over the wires, etc.). With this in mind, the enhancements of a new internetworking layer should not be limited to QoS or routing efficiency: data persistence, availability and authentication of the data itself are beneficial in-network capabilities to be embraced from the design stage [DONA].
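The “retrieval of named pieces of data” point can be made concrete with a toy sketch. Below is a minimal, hypothetical illustration (class and function names are mine, and I reduce self-certifying names to a plain content hash, a simplification of what DONA actually proposes): the consumer asks the network for a name, any node holding a copy may answer, and integrity is verified against the name itself rather than by trusting the answering host.

import hashlib

def name_of(data: bytes) -> str:
    """A flat, self-certifying name: here simply the SHA-256 of the bits."""
    return hashlib.sha256(data).hexdigest()

class ContentNode:
    """A node (origin server or in-network cache) indexed by name."""
    def __init__(self):
        self.store: dict[str, bytes] = {}
    def publish(self, data: bytes) -> str:
        n = name_of(data)
        self.store[n] = data
        return n
    def fetch(self, name: str) -> bytes | None:
        return self.store.get(name)

def retrieve(name: str, nodes: list[ContentNode]) -> bytes:
    """Ask nodes in turn; accept the first reply whose hash matches the
    name, i.e., integrity holds regardless of which host answered."""
    for node in nodes:
        data = node.fetch(name)
        if data is not None and name_of(data) == name:
            return data
    raise KeyError(name)

origin, cache = ContentNode(), ContentNode()
n = origin.publish(b"some named piece of data")
cache.store[n] = b"tampered copy"  # a misbehaving cache cannot spoof the name
assert retrieve(n, [cache, origin]) == b"some named piece of data"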

Last decade’s efforts towards a next generation Internet, whether clean slate or evolutionary, have mainly focused on end-host reachability, with novel concepts (e.g., id/loc split) addressing the ’classic’ end-to-end security, mobility and routing issues. The common denominator of these proposals is host-centrism.

Research in a new generation Internet has prompted architectural proposals (e.g., FARA, Plutarch, UIP, IPNL, TRIAD, ROFL, NodeID) that mainly aimed at solving the “old” host connectivity and point-to-point communication problems. At the core of these new architectures are more flexible, expressive, and comprehensive naming and addressing frameworks than the Internet hierarchical IP address space.

However, this trend is changing, and senior researchers who have participated in the Internet’s development since its beginning have advised tackling the future Internet problem from an information interconnection perspective.

Van Jacobson provides a vision [Google video talk] to understand the motivation for a networking revolution: while the first networking generation was about wiring (telephony) and the second generation was about interconnecting wires (TCP/IP), the next generation should be about interconnecting information at large (content-centric networking) [JAC06]. This shift in the orientation of network architecture design implies rethinking many fundamentals by handling information as a first-class object.

We can also observe this shift toward information-centric networking in the momentum of service-oriented architectures (SOA) and infrastructures (SOI), XML routers, deep packet inspection (DPI), content delivery networks (CDN) and P2P overlay technologies. A common issue is the necessity of managing a huge quantity of data items, which is quite a different task from reaching a particular host. In today’s Internet, forwarding decisions are made not only by IP routers, but also by middleboxes, VLAN switches, MPLS routers, DPIs, load balancers, mesh routing nodes and other cross-layer approaches. Moving data-centric functions down to the lower networking layers could be in tune with the trend in access and backbone technologies represented by the coupling of the dominant Ethernet access protocol and label-switched all-optical transport networks.

Only time will tell whether revolutionary networking concepts get commercially deployed. History has shown that economics, and not purely technological arguments, is what ultimately turns prototypes into reality. Recent worrying events (and more to come) may potentially promote and accelerate the adoption of new internetworking paradigms.

Today’s economy is Internet-sensitive; service outages due to DDoS attacks or to the limitations of insecure BGP routing (remember the Pakistan Telecom YouTube shutdown?) carry significant worries and expenses: Internet reports claim potential costs of $31,000 per minute for Amazon’s two-hour outage in June 2008.

Furthermore, end-users suffer from threats coming from the network, such as evolving phishing methods and new forms of SPAM like SPIT (over IP telephony) or SPIM (over instant messaging), which may end up frustrating the so far successful Internet-based communication experience.

A recent move towards information-centric networking can be observed in projects addressing future internetworking models such as Trilogy, 4WARD, EIFFEL, PSIRP, ICT’s FIRE and other activities in the frameworks of EU FP7 and NSF FIND. Similar in spirit, data-centric architectural proposals to date include DOA, i3, DONA, Haggle and RTFM, in addition to ‘peer-to-peer’, ‘content-delivery’, ‘sensor’ and ‘delay-tolerant’ networks.

More than an endless discussion [Darwin] around clean-slate design and actual network (r)evolution deployment, what we really need for future internetworking is 1) ‘clean-slate thinking’ beyond the TCP/IP heritage to foster innovation through questioning paradigms; and 2) feasibility work on an information-oriented infrastructure capable of supporting the actual and future demands over the network of networks.

This post is the motivation and background of my current research work [SPSWITCH], now in cooperation with Ericsson Research and the EU FP7 PSIRP project.

SPSwitch

References

[JAC06] V. Jacobson. If a clean slate is the solution what was the problem? Stanford “Clean Slate” Seminar, Feb 2006. [Google video talk]

[RTFM] M. Särelä, T. Rinta-aho, and T. Tarkoma. RTFM: Publish/subscribe internetworking architecture. ICT Mobile Summit, Stockholm., June 2008.

[PSIRP] http://psirp.org

[Darwin] What would Darwin Think about CleanSlate Architectures?

[DONA] T. Koponen, M. Chawla, B.-G. Chun, A. Ermolinskiy, K. H. Kim, S. Shenker, and I. Stoica. A data-oriented (and beyond) network architecture. SIGCOMM Comput. Commun. Rev., 37(4):181–192, 2007.

[SPSWITCH] C. Esteve Rothenberg, Fabio Verdi and Mauricio Magalhaes. “Towards a new generation of information-oriented internetworking architectures” ACM CoNext, First Workshop on Re-Architecting the Internet (Re-Arch08). Dec. 2008, Madrid, Spain. [PPT] [PDF] [bibtex]

History has shown that business (plus some timing component), and not pure technology, is what turns prototypes into reality. In case there are alternatives competing for exactly the same place in the ecosystem, as in nature, only one will survive, and probably not the best from a technology point of view. So far, so good.
But what if there is no alternative product, and a technical solution is killed because it could change the ecosystem, the walled garden? The motivation for asking myself this is a documentary I recently saw and do recommend:

Who Killed the Electric Car?

Don’t ask me where to get it, try your favorite content delivery system.

It hurts to see how good technology, enabled by man-years of effort and brilliant ideas, can be frozen by entrenched business models. Curiously, the electric car is gaining momentum again with the global warming issues and the energy crisis.

I was thinking about similar documentaries that could be made in the computer and network industry, beginning with the historical IBM, OSI, and EU vs. USA standards battles, up to the recent high-definition video format wars (there are always lessons to learn). Of course, there are always fair technology wars. Did NAT kill IPv6? If it can be considered dead… We could also be kinder and think of a new title like “Who pushed XYZ forward?”.

Talking about good content, freely and legally available in the network of networks, I can only recommend the TED conference 20-minute talks. My pick of today:


Recent projects have brought me to dive into the field of the Semantic Web, ontologies and their application to new networking paradigms, in both telco-driven Next Generation Network (NGN) architectures and Next Generation Internet (NGI) architectural proposals. Note that the term “next generation” has become a buzzword that often leads to confusion, since NGNs and NGIs have very different foci when looking at future networks.

Telecom world (NGN): In order to realize the promise of fast time-to-market for fancy blended multimedia services (presence+IPTV+FMC+IM+voice+Push-to-X+…), IMS alone is not enough. The IMS-based service layer of NGNs needs a Service Broker strategy in the Service Delivery Platform (SDP) to orchestrate the operations involved in service activation, provisioning and, finally, service execution. SDPs are therefore moving towards service-oriented architectures (SOA) and ultimately towards so-called Service Oriented Infrastructures (SOI): a middleware infrastructure that natively supports XML processing and all configurable infrastructure resources (compute, storage, and networking hardware and software to support the running of applications). Service Oriented Network Architectures (SONA) is also a term commonly used for these new event-driven, service-centric architectures; however, I dislike it due to the existence of a commercial solution by Cisco using this name (nothing at all against Cisco, I just prefer to stay with generic terminology).

Time-to-market of new services over SOA-based SDPs is still too high when you must consider the integration of the legacy and heterogeneous OSS/BSS systems of a big telco. An incumbent fixed and mobile operator present in several countries faces the challenge of heterogeneous service delivery frameworks, and trying to deliver the “same service” in the different localities becomes a nightmare due to system heterogeneity and the need for local customization. No surprise, then, that major telcos are going the way of outsourcing network operations and opening their service platforms to third parties. We may see up to 100% outsourcing with new business models in the near future, with traditional telcos acting as service supermarkets: providing the infrastructure, branding, auditing the services, billing (! always) and leaving to service providers the tedious task of managing the SDP and even pushing their products to the front (I hope the analogy is clear; in a future post I will try to explore this issue further).

The main point is that the way telco R&D envisions overcoming these heterogeneity issues is not just SOA (XML, Web Services); it should also include the notion of semantics and the concepts of ontologies (DAML, OWL, RDF). The most relevant effort is the EU SPICE project, and a good paper to understand this approach is SPICE: Evolving IMS to Next Generation Service Platforms. XML provides standardized data, but only by leveraging these data connectors with semantics are you in a position to deal with aggregation, adaptation, composition, integration and personalization of the heterogeneous systems involved in the service environment. Equipment vendors are already looking at how to incorporate the required semantics and ontologies to implement Service Orchestration in their SDP solutions; the SCIM in the IMS architecture is a good starting point, but service provisioning and activation in OSS systems are also major elements that could benefit from these semantic enhancements.

Knowledge plane in the SPICE project.
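To make the semantics point a bit more tangible, here is a minimal, hypothetical sketch using the third-party Python rdflib library (the tiny vocabulary, URIs and service names are mine, not SPICE’s): two service silos described with one shared ontology become queryable as a single knowledge base, which is exactly the kind of integration a plain XML schema alone does not give you.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SDP = Namespace("http://example.org/sdp#")  # illustrative shared vocabulary

g = Graph()
# Silo A exports its IPTV service using the shared terms
iptv = URIRef("http://opco-a.example.org/services/iptv")
g.add((iptv, RDF.type, SDP.Service))
g.add((iptv, SDP.requiresCapability, Literal("multicast")))
# Silo B exports presence, described with the same vocabulary
presence = URIRef("http://opco-b.example.org/services/presence")
g.add((presence, RDF.type, SDP.Service))
g.add((presence, SDP.requiresCapability, Literal("sip")))

# One SPARQL query now spans both heterogeneous sources
q = """
SELECT ?svc ?cap WHERE {
  ?svc a sdp:Service ;
       sdp:requiresCapability ?cap .
}
"""
for row in g.query(q, initNs={"sdp": SDP}):
    print(row.svc, "needs", row.cap)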
Interestingly, the research coming from the clean-slate wave of future Internet projects (NGI) has also pointed out the requirement for semantics in the new Internet, leading to the notion of a Knowledge Plane for the Internet. I observed this tendency only recently in a post from Dirk Trossen on the architectural vision called tussle networking (slides here). The EU PSIRP project proposes a new information-centric internetworking architecture and foresees a kind of knowledge plane, based also on Semantic Web techniques, to govern the new networking patterns.

D. Trossen slides

This knowledge plane aims at having a higher-level view of the network: an information-aware system that uses artificial intelligence and cognitive techniques to solve the conflicts (tussles) in the network between the different actors (users, network resources, businesses, organizations). It relies on end-system involvement and is a distributed plane that needs to correlate information from different points in the network. It is supposed to recursively, dynamically and autonomously compose and decompose with the scale of the network. Yes, I also think it may sound very abstract, futuristic and ambitious, but I am starting to see that this network behavior might be implementable with enhanced techniques from the Semantic Web. Furthermore, I believe it is the responsibility of network research to think out of the box and try new paradigms, thereby overcoming our frequent inability to see beyond what is already there (this is a topic worth debating in a future post).

My point is: once you leverage SOA-based interfaces and database interactions with an ontology-enabled semantic scheme, you are in a position to construct this “knowledge plane”, whether for your SDP or for the global Internet, attending in each case to the different requirements in terms of performance and scalability. With the gathered mesh of information, the knowledge plane is then in a position to perform the reasoning processes needed to tackle the particular networking tussles.

A practical example comes from some of the most innovative areas in Internet research of the last decade: CDNs and P2P systems. In my opinion, CDNs are already acting somehow as knowledge agents, gathering information about network performance and user location with the goal of redirecting user requests to a concrete (the most convenient) surrogate server. A smart use of this CDN knowledge is a recent P2P project that benefits from the “user vicinity information” granted by the CDN to build the peer links, thus enhancing the quality and performance of the overall P2P network; a sketch of the idea follows.
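Here is a minimal, hypothetical sketch of that “CDN as an oracle” idea (all peer names and edge mappings below are made up; the real project infers vicinity from actual DNS lookups of CDN-hosted names): peers that the CDN redirects to the same edge server are probably close in the network, so they are preferred as neighbors.

# peer_id -> CDN edge server its DNS lookups were redirected to
cdn_edge_seen_by = {
    "peer-a": "edge-fra-1",
    "peer-b": "edge-fra-1",
    "peer-c": "edge-nyc-3",
    "peer-d": "edge-fra-1",
}

def rank_neighbors(me: str, candidates: list[str]) -> list[str]:
    """Order candidate peers so that those sharing my CDN edge
    (hence, likely my network vicinity) come first."""
    my_edge = cdn_edge_seen_by[me]
    return sorted(candidates,
                  key=lambda p: cdn_edge_seen_by.get(p) != my_edge)

print(rank_neighbors("peer-a", ["peer-c", "peer-b", "peer-d"]))
# -> ['peer-b', 'peer-d', 'peer-c']

The nice property is that the P2P system gets locality knowledge for free, piggybacking on measurements the CDN is making anyway.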

Yours,

Christian.

P.D. Note the similarities around knowledge from the cited works:

NGI perspective: A Knowledge Plane for the Internet:

A network with a knowledge plane, a new higher-level artifact that addresses issues of “knowing what is going on” in the network.

At an abstract level, this is a system for gathering observations, constraints and assertions, and applying rules to these to generate observations and responses.

At the physical level, this is a system built out of parts that run on hosts and servers within the network. It is a loosely coupled distributed system of global scope.

NGN perspective: SPICE Knowledge Services:

Knowledge services provide access to various knowledge gathered from the web, network operators, user profiles and information about local resources.

Local resources are typically discovered by the user terminal and may include nearby devices, accessible networks, local services and sensor data gathered from wireless sensor networks.

The generic form of a knowledge service is called a knowledge source and provides an interface for querying and subscribing to knowledge and register with a knowledge broker.

A knowledge broker is used to find knowledge sources that are able to answer a specific query. Specialized knowledge services include reasoners and recommenders that derive knowledge for personalized end-user services and offer more specific interfaces.

The knowledge layer is in itself a service oriented architecture whose components are used by value added services.

This week I stumbled across the term Advanced Multimedia System (AMS), and my immediate reaction was to think of a new buzz flavour of the IP Multimedia Subsystem (IMS). After a first look at the available documentation, I recalled having seen some ITU H.XYZ activities over a year ago in the very recommendable ITU seminars. Indeed, they are more than related:

The Advanced Multimedia System (AMS) project was formerly referred to as “project H.325” (“H.323, SIP: is H.325 next?” was the presentation I had seen a year ago) and aims at driving the development of a third-generation multimedia terminal and system architecture able to support emerging, media-rich applications that fall outside the bounds of traditional call-based communication platforms (sounds like IMS, doesn’t it?).

I spent some time going through the available material trying to understand what is behind AMS, and the TL;DR version is that 1) it may be confused with IMS and 2) it is still in its very early infancy. The Advanced Multimedia System (AMS) is a new multimedia system project driven by the ITU, currently in the requirements-gathering phase; see the formal project description. AMS is viewed as the successor to the 12-year-old legacy H.323 and SIP systems.

Googling a little more, I came across a post from Radvision that confirmed my impression that there is no real connection between AMS and IMS (besides the unfortunate use of a similar acronym). Even more unfortunate considering the outcome of the Advances to IMS (A-IMS) initiative.

Frens Jan Rumph compared AMS and IMS in terms of charging and billing, highlighting that IMS is a network designed to make money, whereas AMS is to be designed for service provision around users. Users are empowered to coordinate their multimedia activities using the modes that best fit their personal/business situation and their needs or desires.

Rather than focusing on enabling multimedia telephony and “the fancy blended/bundled service you like” on top of IMS, AMS promises a user-centric environment with many AMS-enabled devices (portable wireless, home entertainment, computer-based devices) acting as containers that support many applications and services in either a peer-to-peer or network-provided fashion.

My understanding is that, rather than a replacement for IMS, AMS comes to fill something that was deliberately left out of the 3GPP IMS specifications: the service and application layer. Of course, there must be some new underlying system design that supports the AMS environment, but I guess AMS could be approached as currently done by Service Delivery Platform (SDP) solutions, bridging “seamlessly” the legacy and the new generation networks based on IMS or whatever NGN control subsystem. Furthermore, given the timing differences and the status of the recommendations, AMS is definitely not something to “care about” in the short or mid term.

AMS concept

Figure 1: AMS Container-App concept [Source: Packetizer]

AMS will define the procedures for application-to-application communication through the AMS-enabled network. The ITU study is expected to cover, among other things:

  • Downloadable codecs
  • System decomposition
  • Discovery of services
  • Support for transcoding functionality (e.g. text to speech)
  • Dynamic device discovery
  • Application plug in
  • Consideration of various business models
  • Integrated QoS, security and mobility functionality

The goal of the AMS project is to create a new multimedia terminal and systems architecture that supports distributed and media rich collaboration environments. The targeted applications include highly converged media applications involving multiple personal and public devices, enterprise systems and network services in support of communications, collaboration and entertainment. Specifications arising from this project will enable the development of the terminals and systems, and also inter-communication between systems so applications involving multiple devices and mobile systems can be supported.

How will AMS be related to the ongoing work in the Open Mobile Alliance (OMA), the organization tackling IMS/NGN service environment standardization? And talking about standardization… there will again be the discussion on whether to standardize or not (industry standards vs. proprietary technologies). What actually hampers or promotes innovation? What makes real-world interoperability possible? De-facto standards? ITU-T is not famous for its standards development agility, and with regard to services, the Web 2.0 lesson is that nowadays there is no time to stop and sit together to standardize Internet-based services and thereby lose time-to-market. There is just time to keep adopting the programming trends (WS, REST, Ruby, …) and to define and reuse APIs as much as possible to reach scale in the Web mesh.

To conclude, AMS is definitely something worth keeping an eye on, or maybe something better: an opportunity to participate and contribute almost from the very first minute.

Comments and discussion very welcome!

Christian.


P.D.: Timeline of AMS:

Past Milestones:

  • CfR Issued: SG 16 WP2 Rapp. meeting, Biel, Switzerland, 17 – 20 May 2005
  • Initial CfR Responses: Contributions into the SG 16 Meeting, Geneva, Switzerland, 26 July – 5 August 2005
  • Workshop titled “H.323, SIP: is H.325 next?” (San Diego, 9-11 May 2006)
  • Agreement in SG16 to create a new Question to study AMS (July 2007)
  • Creation of the AMS project description (September 2007)
  • Final approval of new Question 12/16 to develop AMS (June 2008)

Next Steps:

  • Collection of requirements continues with architecture inputs expected, input contributions are requested for the next meetings
    (Chapel Hill, 25 – 27 June 2008 and Geneva, 25-29 August 2008).
    For submitting Contributions, see the instructions at the meeting website
  • Contributions into the SG 16 Meeting, Geneva, Switzerland, 27 January – 6 February 2009*
    (deadline: 16 January 2009 to the TSB at tsbsg16@itu.int)
  • Completion (depends on input contributions) – 2010*

(*: Tentative dates)

I have resisted starting a blog for a long time… What was I supposed to write about? My private life? My friends already know what I am up to, and others would not care what I am doing… My job? My concerns? Mmhhh, maybe…

During this last year, I have been fed by a lot of interesting blog posts and got involved in very interesting discussions, gaining insights on topics and issues I never imagined. I think that has triggered my willingness to participate more actively in this information-sharing process.

So, my own gift for this year’s birthday (23rd of June) is to start this blog, with the intent of publishing on the topics that run in my head, trying to think out of the box, and hoping to achieve some regularity in posting, where regularity is not exactly part of my skill set.

Summing up, my motivation and goals for this academic-research/technology blog:

  1. I realized that putting into text the ideas floating around in your head helps to digest and analyze them. Then you can easily share them and receive feedback that may ultimately help in assessing your ideas and encourage true innovation.
  2. Blogging naturally fits the spirit of research, allowing you to express your ideas about your research in a less formal way than by writing a paper or technical report.
  3. Blogging on research enables linking with diverse researchers whose varied interests and points of view keep your mind open and fresh. Sharing your thoughts with a potentially large Internet community helps you get in touch with people with common concerns, ultimately promoting constructive criticism and creative thinking.

Please feel free to comment, send feedback, and bear with me when I write nonsense.

Christian.
