Many people, maybe including you, have heard the term latency. But what does latency actually mean? Latency is the time it takes data to travel from its origin to its destination, and it is measured in milliseconds. In layman’s terms, latency is simply delay or lag.
In networking terms, latency is determined by the time a request takes to travel from the sender to the receiver, plus the time the receiver needs to process that request. In other words, it is the travel time from the browser to the server and back.
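That round trip can be timed by hand. The sketch below measures it with a plain TCP exchange: send a few bytes, wait for the reply, and clock the whole trip. The local echo server here is only a stand-in for a real remote host, so the numbers it produces are illustrative.

```python
# A minimal sketch of measuring round-trip time (RTT): we time one full
# request/response exchange over TCP. The throwaway local echo server
# stands in for a real remote host.
import socket
import threading
import time

def start_echo_server():
    """Start a throwaway TCP server on an ephemeral local port."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)

    def serve():
        conn, _ = server.accept()
        conn.recv(16)
        conn.sendall(b"pong")
        conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return server.getsockname()[1]

def measure_rtt(host, port):
    """Milliseconds for one send/receive exchange, connection included."""
    start = time.perf_counter()
    with socket.create_connection((host, port)) as sock:
        sock.sendall(b"ping")
        sock.recv(16)
    return (time.perf_counter() - start) * 1000

port = start_echo_server()
rtt_ms = measure_rtt("127.0.0.1", port)
print(f"RTT: {rtt_ms:.3f} ms")
```

Against localhost the result is a fraction of a millisecond; against a server on another continent, the same measurement runs to tens or hundreds of milliseconds.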
Let’s take an example: you are in Bali and send an email with an attachment, whether a document or a photo, to a colleague in Osaka, Japan. When the email is sent, there will be a latency of a fraction of a second before it is received.
Latency on the order of a fraction of a second is still quite reasonable. But if it stretches into whole seconds, that is high latency, and something must be done to solve it, starting with finding the cause.
Latency can be reduced using several different techniques. Reducing server latency helps your web resources load faster, improving overall page load times.
Using Content Delivery Network
Using a content delivery network brings resources closer to users by caching them in multiple locations around the world. Once those resources are cached, a user’s request only needs to travel to the nearest Point of Presence (PoP) to retrieve the data.
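The routing decision a CDN makes can be sketched in a few lines: the same cached resource exists at several PoPs, and each user is served from the one with the lowest latency. The PoP names and RTT figures below are invented for illustration.

```python
# Hypothetical sketch of CDN request routing: serve each user from the
# Point of Presence with the lowest round-trip time. All figures invented.
rtt_to_pops_ms = {
    "singapore": 12.4,
    "tokyo": 48.9,
    "frankfurt": 182.3,
    "virginia": 221.7,
}

def nearest_pop(rtts):
    """Pick the PoP with the lowest measured round-trip time."""
    return min(rtts, key=rtts.get)

print(nearest_pop(rtt_to_pops_ms))  # singapore
```

A real CDN makes this choice with DNS or anycast routing rather than client-side code, but the effect is the same: the request never has to cross the world to reach the origin server.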
Using the Pre-fetch Method
Prefetching web resources doesn’t necessarily reduce the amount of latency, but it does improve the perceived performance of your website. With prefetching, the latency is paid in the background while the user is still browsing a particular page.
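The idea can be sketched like this: while the user reads the current page, we start fetching the resources they will likely need next, so that by the time they navigate, the wait is already over. The `fetch()` function here is a placeholder for a real HTTP request.

```python
# Prefetching doesn't remove latency; it moves it into the background.
# fetch() is a stand-in for a real network request (~100 ms of latency).
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(url):
    """Placeholder for a real, slow HTTP request."""
    time.sleep(0.1)
    return f"contents of {url}"

likely_next = ["/style.css", "/photo.jpg", "/next-article.html"]

with ThreadPoolExecutor() as pool:
    # Kick off the fetches in the background...
    futures = {url: pool.submit(fetch, url) for url in likely_next}
    time.sleep(0.2)  # ...while the user is reading the current page.

    # By the time the user navigates, the resource has already arrived:
    page = futures["/next-article.html"].result()
print(page)
```

In a browser, the same effect is usually achieved declaratively with resource hints such as `<link rel="prefetch">` rather than application code.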
Using Browser Caching
Another type of caching that can be used to reduce latency is the browser cache. The browser saves certain resources from the website locally, which helps cut latency and reduces the number of requests going back to the server.
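A browser cache is, at its core, a local lookup that short-circuits the network. In this minimal sketch, the first request for a resource goes to the server, and every repeat request is answered locally, so its latency drops to near zero and the server sees fewer requests.

```python
# Minimal sketch of a browser-style cache: pay the network latency once,
# then serve repeat requests locally.
cache = {}
server_hits = 0

def fetch_from_server(url):
    """Placeholder for a real, slow network request."""
    global server_hits
    server_hits += 1
    return f"contents of {url}"

def cached_fetch(url):
    if url not in cache:        # cache miss: pay the network latency once
        cache[url] = fetch_from_server(url)
    return cache[url]           # cache hit: served locally

cached_fetch("/logo.png")
cached_fetch("/logo.png")
cached_fetch("/logo.png")
print(server_hits)  # 1 -- only the first request reached the server
```

Real browser caches add expiry and validation (`Cache-Control`, `ETag` headers), but the latency win comes from exactly this lookup.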
How to Calculate Latency
Please note that a fiber-optic cable carries signals at close to the speed of light, which is 299,792,458 meters per second in a vacuum. At that speed, light covers one kilometer in about 3.33 microseconds; inside glass fiber the signal travels at roughly two-thirds of that speed, or about 5 microseconds per kilometer. The farther a data packet has to travel, the greater the latency. Latency is generally measured using two methods, namely Round Trip Time (RTT) and Time To First Byte (TTFB).
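That physical floor on latency is easy to compute: distance divided by signal speed. The sketch below applies it to the Bali-to-Osaka example from earlier; the distance is an approximate great-circle figure, used only for illustration.

```python
# Back-of-the-envelope propagation delay: distance / signal speed.
# Light in a vacuum covers 1 km in ~3.33 microseconds; in fiber the
# signal is roughly a third slower, ~5 microseconds per km.
SPEED_OF_LIGHT_KM_S = 299_792.458                 # km/s, in a vacuum
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3    # rough figure for glass fiber

def one_way_delay_ms(distance_km, speed_km_s):
    """Minimum one-way travel time in milliseconds."""
    return distance_km / speed_km_s * 1000

bali_to_osaka_km = 5_100  # approximate great-circle distance
print(f"vacuum: {one_way_delay_ms(bali_to_osaka_km, SPEED_OF_LIGHT_KM_S):.1f} ms")
print(f"fiber:  {one_way_delay_ms(bali_to_osaka_km, FIBER_SPEED_KM_S):.1f} ms")
```

So even with a perfectly direct cable, a one-way trip from Bali to Osaka cannot be faster than a couple of dozen milliseconds; routing detours, switching, and server processing only add to that.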
To calculate latency, you can use a tool that measures Time To First Byte from multiple server locations around the world. The following is how to measure latency using the Time To First Byte method.
First, open the website tools.keycdn.com/performance and enter your website’s URL. Click “Test” and wait for the results, then look at the TTFB column. It shows the latency measured from each location; the smaller the value, the better.
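If you’d rather measure TTFB yourself, the sketch below times how long it takes from sending a GET request until the first bytes of the response arrive. To keep the example self-contained it measures against a throwaway local server; point `measure_ttfb` at a real host and port to test your own site.

```python
# A sketch of measuring Time To First Byte (TTFB) by hand: milliseconds
# from sending a request until the response status line arrives.
import http.client
import http.server
import threading
import time

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo quiet
        pass

# Throwaway local server standing in for a real website.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def measure_ttfb(host, port=80, path="/"):
    """Milliseconds from issuing a GET until the status line is read."""
    conn = http.client.HTTPConnection(host, port, timeout=10)
    start = time.perf_counter()
    conn.request("GET", path)
    response = conn.getresponse()  # returns once the first bytes arrive
    ttfb_ms = (time.perf_counter() - start) * 1000
    response.read()
    conn.close()
    return ttfb_ms

ttfb_ms = measure_ttfb("127.0.0.1", port)
print(f"TTFB: {ttfb_ms:.2f} ms")
```

Note that TTFB measured this way includes connection setup, network travel in both directions, and the server’s own processing time, which is why it is a useful end-to-end proxy for latency.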
Latency vs Bandwidth
Although latency and bandwidth can give the same impression, they have different meanings. It’s easiest to visualize how each term works with a water pipe. Bandwidth determines how narrow or wide the pipe is: the narrower it is, the less data can be pushed through it, and vice versa. Latency, meanwhile, determines how quickly the content in the pipe travels from the client to the server and back.
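The pipe analogy can be put into numbers with a simplified model: total transfer time is roughly the latency (how long the first byte takes to arrive) plus size divided by bandwidth (how long the pipe takes to push the rest through). The figures below are illustrative.

```python
# Simplified model of a download: transfer time = latency + size / bandwidth.
def transfer_time_s(size_mb, latency_ms, bandwidth_mbps):
    """Approximate seconds to fetch `size_mb` megabytes over the given link."""
    size_megabits = size_mb * 8
    return latency_ms / 1000 + size_megabits / bandwidth_mbps

# A 5 MB page over two links with the same bandwidth but different latency:
print(f"{transfer_time_s(5, latency_ms=20, bandwidth_mbps=100):.2f} s")
print(f"{transfer_time_s(5, latency_ms=300, bandwidth_mbps=100):.2f} s")
```

This is why upgrading bandwidth alone doesn’t always make a site feel faster: for small resources, the fixed latency term dominates the total, and only moving the data closer (or reducing round trips) brings it down.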