Technology that Translates Content to the Internet Protocol of the Future
The protocol that devices use to connect to the Internet, IPv4 (Internet Protocol version 4), has a problem: due to the Internet's tremendous growth, its addresses have recently run out worldwide. According to experts, the solution lies in IPv6, a protocol that is in the early phases of deployment and that is expected to eventually replace its predecessor. However, there is another problem: the two protocols are incompatible. "Machines that only have IPv6 cannot communicate with those that only have IPv4, which is the case for the majority of machines used to connect to the Internet today, and vice versa," explains Marcelo Bagnulo, professor of the NETCOM research group at UC3M, where a solution to the problem has been developed.
The group's objective was to make it possible for machines that connect to the Internet with IPv6 addresses in the future to access earlier content, which will remain on IPv4. To do this, the researchers defined translators that allow the two protocols to interoperate, through technologies called NAT64 and DNS64, now standards adopted by the major router manufacturers, such as Cisco or Juniper, and the major DNS vendors, such as BIND or Microsoft. "We have designed and standardized these transition tools, which have been adopted by the industry and are now available commercially," states Marcelo Bagnulo, professor in the Telematics Department and Director of the Cátedra Telefónica de Internet del Futuro (Telefónica Internet of the Future Chair) at UC3M. "It is relatively easy to invent a new protocol, but it is extremely difficult to design one that is then really introduced and used, since standardization is an important step toward the future use of a technology," he explains.
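The core of the DNS64 idea can be sketched in a few lines: when a host only reachable over IPv4 is looked up, the resolver synthesizes an IPv6 address by embedding the 32-bit IPv4 address inside a dedicated IPv6 prefix (the well-known prefix 64:ff9b::/96 defined for such translators), so the NAT64 box can later recover the real IPv4 destination. A minimal sketch of that embedding, using Python's standard library:

```python
import ipaddress

# Well-known NAT64/DNS64 translation prefix (RFC 6052); real deployments
# may be configured with a network-specific prefix instead.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_ipv6(ipv4_str: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the translator prefix, as a DNS64 server
    does when it synthesizes an AAAA record for an IPv4-only host."""
    v4 = ipaddress.IPv4Address(ipv4_str)
    # The IPv4 address occupies the low-order 32 bits of the /96 prefix.
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))

print(synthesize_ipv6("192.0.2.1"))  # → 64:ff9b::c000:201
```

An IPv6-only client then connects to the synthesized address, and the NAT64 translator strips the prefix to obtain 192.0.2.1 and forwards the traffic over IPv4.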
Three keys to improving the Internet's future
This research, which was accepted for publication in the journal IEEE Communications and has produced two standard RFCs, falls within Trilogy, a project that received the award for best project at the most recent Future Internet Award prizes, given by ceFIMS (Coordination of the European Future Internet forum of Member States of the European Union). The objective of this project, whose name derives from the trilogy "routing, congestion control and cost effectiveness," is none other than to improve the quality of the flow of information and the internal workings of the network, which is basically characterized by the interrelation of two systems. The first, routing, defines the route that data follow, while the second, congestion control, determines how much data flow along it. "At present," points out Professor Bagnulo, "they function independently, because the mechanism that decides where the data will flow does not take into consideration how much other data are already flowing through that same path." This means that, when there is congestion, new data are not aware of it and therefore cannot choose an alternate path. The scientists offer a comparison: it is as if there were no illuminated signs or radio notices warning of upcoming delays, so drivers cannot change their route to avoid the traffic jam.
One of Trilogy's main objectives is for these two systems to function in a more coordinated manner. To this end, the researchers propose various technologies that monitor data flows and redirect them from congested routes (as can occur with peer-to-peer applications) to less congested parts of the network. In particular, they have designed, implemented and standardized the Multipath TCP (MPTCP) protocol in the IETF, which allows a single connection to flow through multiple paths. With ordinary TCP, a smartphone connected to the Internet through Wi-Fi loses its communication when the user leaves the coverage area, and a new connection must be made. Using MPTCP, however, "it is possible to pass this communication to the alternate interface, so that the connection is maintained, in addition to increasing the speed of the data transfer," comments the expert.
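From an application's point of view, MPTCP is designed to be a drop-in replacement for TCP: the program opens one stream socket, and the kernel spreads it over the available interfaces. A minimal sketch, assuming a Linux kernel with MPTCP support (Python exposes `socket.IPPROTO_MPTCP` on Linux from version 3.10), with a fallback to plain TCP where it is unavailable:

```python
import socket

# IPPROTO_MPTCP is only defined on Linux with Python >= 3.10; on other
# platforms we fall back to an ordinary TCP socket.
MPTCP = getattr(socket, "IPPROTO_MPTCP", None)

def open_stream_socket() -> socket.socket:
    """Try to create a Multipath TCP socket; fall back to plain TCP
    if the OS or kernel does not support MPTCP."""
    if MPTCP is not None:
        try:
            return socket.socket(socket.AF_INET, socket.SOCK_STREAM, MPTCP)
        except OSError:
            pass  # kernel built without MPTCP support
    return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s = open_stream_socket()
# The rest of the application uses s exactly like a normal TCP socket:
# s.connect(...), s.sendall(...), s.recv(...), etc.
s.close()
```

This fallback pattern mirrors the protocol's design goal: applications need no changes, and hosts without MPTCP simply negotiate regular TCP.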
Another technology proposed by these scientists is CONEX, which allows users to be charged for the volume of congestion they generate rather than for the quantity of traffic they create. It is like applying the pricing model of low-cost airline tickets to the Internet, the researchers state: if many people want to send data at the same time, they pay more, and vice versa. "Currently, all users pay the same prices, so they use the Internet indiscriminately, without considering the load their packets impose, which means that the service provider must arbitrarily reject packets," explains Professor Bagnulo.
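The accounting idea can be illustrated with a toy calculation (the packet sizes and the billing rate below are invented for the example): instead of billing every byte sent, only "congestion-volume" is billed, i.e. the bytes sent while the path was congested.

```python
# Toy sketch of congestion-volume accounting in the spirit of CONEX.
# Each packet is (size_in_bytes, was_sent_during_congestion).
# The rate and traffic below are illustrative assumptions, not real tariffs.

def congestion_charge(packets, rate_per_marked_byte=0.001):
    """Bill only the bytes that contributed to congestion, not total volume."""
    marked_bytes = sum(size for size, congested in packets if congested)
    return marked_bytes * rate_per_marked_byte

flows = [(1500, False), (1500, True), (500, True), (9000, False)]
print(congestion_charge(flows))  # 2000 congestion-marked bytes → 2.0
```

Note that the large 9000-byte transfer costs nothing because it was sent on an uncongested path, while the smaller transfers sent during congestion are billed; this is the incentive the researchers describe, discouraging traffic precisely when and where the network is loaded.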
Provided by Carlos III University of Madrid
© 2012 iScience Times All rights reserved. Do not reproduce without permission.