This paper updates previous work on fitting traffic profiles. Using more modern statistical techniques, it questions (and refutes) previous assumptions about heavy tails in traffic statistics. The authors argue that the best fit for traffic volume per unit time is the log-normal distribution. Tail behaviour can have big impacts on capacity planning and on pricing prediction (say, 95th-percentile billing).
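A minimal sketch of the kind of fit this implies: estimate log-normal parameters from the log of per-interval volumes, then read off the 95th percentile used in billing. The data here is synthetic and the function names are my own, not the paper's.

```python
import numpy as np

def fit_lognormal(volumes):
    """Fit a log-normal by estimating the mean and std of log(volume)."""
    logs = np.log(volumes)
    return logs.mean(), logs.std()

def percentile_95(mu, sigma):
    """95th percentile of the fitted log-normal (z ~= 1.645)."""
    return float(np.exp(mu + 1.6449 * sigma))

# Synthetic example: draw volumes from a known log-normal and recover it.
rng = np.random.default_rng(0)
volumes = rng.lognormal(mean=10.0, sigma=1.0, size=100_000)
mu, sigma = fit_lognormal(volumes)
p95 = percentile_95(mu, sigma)
```

The parametric 95th percentile should closely track the empirical one when the log-normal assumption holds, which is the paper's point about its usefulness for capacity planning.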
This paper describes a C# library for building networked programs that can compile to several hardware and software target platforms, which greatly eases development and debugging. The system is tested with NetFPGA as a target and performs almost as well as hand-tuned code.
This paper describes a system for middleboxes that process application-level data -- that is, reconstructed TCP flows rather than individual packets. The system consists of three parts:
1) A middlebox-specific language that can concisely express data formats and how to process them, in a "safe" way that allows multiple middleboxes to co-exist on the same physical hardware.
2) An abstraction, the task graph, that breaks middlebox logic into small, parallelisable logical units (tasks) connected by channels through which data flows.
3) A runtime system that allows the compiled code to execute efficiently.
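The task-graph abstraction can be sketched as small processing units joined by FIFO channels. This is an illustrative toy, not the paper's system: the pipeline stages and data are invented.

```python
from collections import deque

class Task:
    """A small, parallelisable unit of middlebox logic.
    Reads items from an input channel, writes results to an output channel."""
    def __init__(self, fn, inbox, outbox):
        self.fn, self.inbox, self.outbox = fn, inbox, outbox

    def run_once(self):
        if self.inbox:
            result = self.fn(self.inbox.popleft())
            if result is not None and self.outbox is not None:
                self.outbox.append(result)

# Channels are plain FIFO queues between tasks.
raw, parsed, out = deque(), deque(), deque()

# Hypothetical two-stage pipeline over reconstructed flow data:
# normalise each request line, then drop lines matching a filter.
parse = Task(lambda data: data.strip().lower(), raw, parsed)
censor = Task(lambda line: None if "blocked" in line else line, parsed, out)

raw.extend(["GET /index.html ", "GET /blocked/page "])
for _ in range(2):
    parse.run_once()
    censor.run_once()
# out now holds only the allowed, normalised request line.
```

Because each task only touches its own channels, independent tasks could be scheduled on separate cores, which is the point of breaking middlebox logic into this shape.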
This paper looks at a new way to use multiple channels in ad-hoc sensor networks. It consists of two parts:
1) A protocol that allows a node, on receipt of a particular message, to attempt a channel change (reliably, with a fall-back if the new channel is subject to interference).
2) An algorithm, run at a single "command" node, that selects which nodes should change channel by solving a graph colouring problem.
The work is tested in simulation using Cooja (which simulates Contiki based sensor nodes).
This paper looks at when TCP is "not" TCP, by analysing five years of Japanese trace data. That is to say, when TCP throughput is limited by mechanisms other than traditional TCP rate control (loss or delay in the network feedback causing a reduction in window size).
Other mechanisms are important:
1) Application limiting -- where the sender "dribbles" out data more slowly, for example as YouTube does, to reduce its bandwidth usage.
2) Window size limitations -- where hosts have an OS-built-in limit on how large the TCP window can grow.
3) Middlebox/receiver window tweaking -- where the receiver or (more likely) a middlebox adjusts the advertised window size to reduce throughput.
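The three mechanisms can be summarised as competing caps on how much a flow may send per RTT; whichever cap is smallest names the limiting factor. This classifier is my own simplification of the idea, with illustrative byte counts.

```python
def limiting_factor(cwnd, rwnd, app_bytes_per_rtt):
    """Classify which mechanism caps a flow's per-RTT sending rate:
    the congestion window (traditional TCP), the advertised/receive
    window, or the application handing over data slowly."""
    cap = min(cwnd, rwnd, app_bytes_per_rtt)
    if cap == app_bytes_per_rtt:
        return "application-limited"
    if cap == rwnd:
        return "window-limited"
    return "congestion-limited"

# e.g. a 64 KiB receive-window cap on a fast, loss-free path:
factor = limiting_factor(cwnd=500_000, rwnd=65_535, app_bytes_per_rtt=1_000_000)
```

The paper's finding is that in real traces the first two branches fire for more than half the packets, i.e. the congestion window is often not the binding constraint.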
It is found that in the traces studied these three mechanisms account for more than half the packets. Since the traces include traffic from well-known sites such as YouTube, the findings likely generalise beyond this particular trace set.
In general this paper finds that TCP in the wild is not behaving in the way it is traditionally taught... by a variety of mechanisms, TCP is neither "filling the pipe" nor "controlled by loss"... other mechanisms are at play beyond traditional TCP congestion control.
This paper analyses a large number of measurements of round trip times collected from DNS servers and looks at how the measurements vary across continents.
This paper looks at a mechanism related to Explicit Congestion Notification. It uses a single bit in the IP header to communicate the congestion at each hop in the path. Statistical estimators are used to work out the accuracy of the congestion estimation.
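One natural estimator for such a single-bit signal treats each packet's mark as a Bernoulli trial: the marked fraction estimates the underlying marking probability, with a confidence width that shrinks as more packets arrive. This is a generic sketch of that idea, not the paper's specific estimator.

```python
import math
import random

def estimate_congestion(marks):
    """Estimate path congestion from a stream of single-bit marks.
    Returns the marked fraction and a ~95% normal-approximation
    confidence half-width for it."""
    n = len(marks)
    p_hat = sum(marks) / n
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, half_width

# Hypothetical: 10,000 packets, ~30% marked by congested hops.
random.seed(1)
marks = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]
p_hat, hw = estimate_congestion(marks)
```

The accuracy analysis in the paper is about exactly this trade-off: how many single-bit observations are needed before the estimate is tight enough to act on.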
This paper creates a simple mathematical model based on Markov chains which can model (with some simple assumptions) the kind of caching trees seen in content-centric networking. The model is validated against simulation results.
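The core Markov-chain machinery can be illustrated with a toy two-state chain for a single cached object, solved for its steady state by power iteration. The transition probabilities below are invented for illustration; the paper's model of caching trees is considerably richer.

```python
import numpy as np

def steady_state(P, iterations=1000):
    """Steady-state distribution of a Markov chain with transition
    matrix P (rows sum to 1), found by power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iterations):
        pi = pi @ P
    return pi

# Toy 2-state chain for one object at one cache: state 0 = absent,
# state 1 = cached. Probabilities are illustrative only.
P = np.array([
    [0.7, 0.3],   # absent -> requested and cached with prob 0.3
    [0.1, 0.9],   # cached -> evicted with prob 0.1
])
pi = steady_state(P)
# pi[1] approximates the long-run hit probability for this object.
```

For this chain the balance equations give pi = (0.25, 0.75) exactly, so the iteration can be checked against the closed form.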
This paper considers the problem of balancing traffic across network egresses. It achieves a workable solution using a scalable packet marking scheme which couples end-hosts controlling their own connections with an overall controller that can select routes appropriately for each flow.
The flows are balanced to seek paths which minimise loss.
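The controller's per-flow decision reduces, at its simplest, to picking the egress with the lowest observed loss. This sketch is a deliberate simplification with invented names and loss figures; the paper's scheme additionally involves end-host participation and marking.

```python
def pick_egress(loss_rates):
    """Pick the egress path with the lowest observed loss rate.
    A central controller could run this over measurements reported
    by end hosts for each flow."""
    return min(loss_rates, key=loss_rates.get)

# Hypothetical measured loss rates per candidate egress.
loss = {"egress-east": 0.020, "egress-west": 0.004, "egress-south": 0.011}
best = pick_egress(loss)
```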