Latency: Definition, Measurement and Testing

Latency is the time it takes for data to travel from one point on a network to another. You can measure it with a ping: your computer sends a small packet of data to a server, the server sends it back, and you record how long the round trip takes.
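As a rough illustration, you can invoke the system ping utility from a short script. This sketch assumes a Unix-like system (the -c flag sets the packet count; Windows spells it -n) and uses example.com as a placeholder target:

```python
import subprocess

# Send four ICMP echo requests and print the round-trip statistics.
# Assumes a Unix-like ping; on Windows, replace "-c" with "-n".
result = subprocess.run(
    ["ping", "-c", "4", "example.com"],
    capture_output=True,
    text=True,
)
print(result.stdout)  # per-packet times plus a min/avg/max summary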

According to Apposite Technologies, a maker of network emulation hardware, latency depends on three factors: the speed at which data is physically transmitted over the network, the route it takes, and how long it spends waiting in queues along the way.
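Those three factors are often treated as additive delays. The sketch below is a back-of-the-envelope decomposition along those lines; all of the numbers are invented for illustration, not measurements:

```python
# Illustrative latency decomposition (all figures are made up for the example):
propagation_ms = 20.0   # physical travel time, set by distance and signal speed
transmission_ms = 1.5   # time to push the packet onto the wire (size / bandwidth)
queuing_ms = 4.5        # time spent waiting in router buffers along the route

total_ms = propagation_ms + transmission_ms + queuing_ms
print(f"Estimated one-way latency: {total_ms} ms")  # 26.0 ms
```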

Measuring Latency

Measured in milliseconds, latency is often expressed as "round-trip time" (RTT), as Frontier notes: the time it takes a data packet to travel from one point on the network to another and back again. A related, less common measure is "time to first byte" (TTFB), the time between a request being sent from one point on the network and the first byte of the response arriving back at it.
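As a rough TTFB measurement, you can time how long an HTTP request takes to return the first byte of its response. This sketch uses only the Python standard library and example.com as a placeholder URL; it slightly overstates TTFB because it also includes DNS lookup and connection setup:

```python
import time
import urllib.request

url = "https://example.com"  # placeholder target
start = time.perf_counter()
with urllib.request.urlopen(url) as response:
    response.read(1)  # block until the first byte of the body arrives
ttfb_ms = (time.perf_counter() - start) * 1000
print(f"Approximate time to first byte: {ttfb_ms:.1f} ms")
```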

Latency is a major issue for satellite internet. Most communications satellites are in geostationary orbit, 22,300 miles (35,900 km) above the Earth, according to Space.com. For data to get from your computer to the server and back, it must cross that distance four times: up and down on the outbound leg, then up and down on the return.
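Even at the speed of light, that distance imposes a hard floor on latency. A quick calculation, using the altitude quoted above and ignoring processing and queuing delays:

```python
SPEED_OF_LIGHT_KM_S = 299_792   # radio signals travel at roughly light speed
GEO_ALTITUDE_KM = 35_900        # geostationary altitude quoted above

one_leg_s = GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S   # one trip up or down
min_rtt_s = 4 * one_leg_s                           # four traversals per round trip
print(f"Theoretical minimum RTT: {min_rtt_s * 1000:.0f} ms")  # ~479 ms
```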

SpaceX founder and CEO Elon Musk unveiled the concept, called Starlink, in January 2015, explaining that the company planned to launch about 4,000 broadband satellites into low-Earth orbit to provide affordable internet. By comparison, there are currently about 2,000 operational satellites in orbit, and about 9,000 have been sent into space throughout human history.

In terms of latency, Starlink aims to significantly reduce the RTT of data packets, minimizing lag. That should make latency-sensitive activities such as streaming and gaming viable virtually anywhere in the world.

Data transfer rates

The idea of creating a network of geographically distant computers was first proposed in the 1960s by MIT computer scientist J.C.R. Licklider in his theoretical paper on interactive real-time computing entitled “Man-Computer Symbiosis.”

According to Scientific American, the first version of ARPANET was limited to just a few nodes in the United States, but the development of packet switching and TCP/IP protocols (the languages of Internet communication) in the 1970s opened the door to global expansion of the network.

Email had been in use since the 1970s, but it was only after Tim Berners-Lee introduced the World Wide Web in the early 1990s that the Internet began to expand beyond research institutions and government agencies. Since then, improvements in data speeds have allowed people and organizations to store and access ever-increasing amounts of information and send larger files faster from anywhere in the world.

Currently, the fastest Internet access is available only in places served by fiber-optic cables; fiber-optic Internet is 20 times faster than cable Internet, according to software and computer maker HP. In remote areas, communications satellites provide Internet access, but such connections are usually slow.

Latency testing
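The ping utility shown earlier is the simplest test. Where ICMP is blocked, a common fallback is to time a TCP handshake, which takes roughly one round trip. A minimal sketch, assuming the placeholder hosts accept connections on port 443:

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443) -> float:
    """Approximate RTT by timing a TCP handshake (about one round trip)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # connection established; we only wanted the handshake timing
    return (time.perf_counter() - start) * 1000

for host in ("example.com", "example.org"):  # placeholder targets
    print(f"{host}: {tcp_rtt_ms(host):.1f} ms")
```

Note that this includes DNS resolution time, so repeated runs against the same host tend to settle closer to the true network RTT.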

Source: www.livescience.com
