Network performance measurement tools

This is a set of tools that measure the performance of your Internet connection and diagnose various common problems. They primarily use the Network Diagnostic Test service provided by the M-Lab platform.

Please note that the measurement data, including your IP address, is logged and made publicly available for research purposes. For more information, please carefully read the M-Lab privacy policy and our privacy policy and terms of use. By using this website and its services, you agree to these terms.

Throughput test

Measures the upload and download throughput ("speed") of your Internet connection, while also monitoring its latency and detecting jitter and bufferbloat.

IP address lookup

Looks up the ISP that an IP address is registered to (WHOIS) and its geographical location. It can also find your own IP address.

Mic & camera test

Tests your microphone and webcam, checking if they are configured correctly for HTML5 audio and video input.

This is a throughput test that measures upload and download speeds, and a few other metrics such as packet loss and latency (round-trip time). It uses the Network Diagnostic Test (NDT) service provided by the M-Lab platform.

Performance metrics FAQ

I’m confused by all these graphs. Why do we need so many numbers to measure how fast an Internet connection is?

Don’t worry, analyzing network performance is a bit complicated, but it’s not hard. Unfortunately there are a lot of misconceptions floating around that make it seem more difficult than it really is. I’ll try to give a simple explanation in this document, and answer common questions.

When it comes to understanding network performance, it helps to use an analogy: the Internet is much like a road network:

  • We want to move things from one point to another. We call the things data and moving them a transfer.
  • The roads are called links. The path from a source to a destination is called…a path.
  • We want moving things to be fast. This is hard to define; it’s easier to think about how it can be slow.
  • Moving things is slow when the two points are far away. We call this delay or latency; or informally, lag. If we want to move something with a round-trip (from A to B and back to A), we call this a round-trip time or RTT. Otherwise it’s called one-way delay. Latency can refer to either, depending on the context. On road networks, latency is caused primarily by the travel distance and the speed limit. In computer networks, latency is caused primarily by the travel distance and the speed of light.
  • Moving things is slow also when there is a lot of traffic and the roads are too narrow. We call the road “width” bandwidth: wider roads can carry more cars than narrow roads in a unit of time.
  • When there is more incoming traffic on a road than it can carry, we say that it is congested.
  • When a road is congested, a queue forms. We say that there is traffic queuing. A little bit of queuing is not a problem, but if the queue is too long we have to wait a long time to pass the congestion point. We call this waiting time queuing delay. It’s annoying if we want to get somewhere quickly; and it’s a big problem if we want to make multiple round trips on the same path.
  • If we want to make quick trips often, it’s also annoying if the road is sometimes congested and other times fast, because we can’t predict how long the trip will take. We call the variation of latency jitter.
  • If a road is too congested, an evil traffic robo-cop will show up and take away all the excess cars to clear the way. Our things will not reach the destination. We call this packet loss.
  • When our things are lost on the way, we will try to send others in their place. We call this retransmission. If we have to do it often, it’s a sign of poor network performance. It’s also a useful signal in case we somehow didn’t notice that it takes forever to move our things.
  • When we notice that the road is congested and our things get eaten by the robo-cop, we might decide to send them a bit later, when there is less traffic. This is called congestion control. Some smart people came up with a scheme to do it in a way that is relatively efficient and fair, and tries to prevent our things being eaten. They called it TCP. Other people prefer to use UDP, which lets us send what we want, when we want, but doesn’t guarantee delivery.
  • Some road owners didn’t want people to be so sad because their things get eaten by the robo-cop when the queues are too long. So they came up with the brilliant solution of building huge parking lots near intersections, so that more cars can fit there and wait together when the road is congested. Unfortunately this increased waiting times even more, keeping people upset. We call this excessive buffering on a link, or bufferbloat.

So what makes an Internet connection fast?

  • A small latency, close to the limit imposed by the speed of light. That’s a round-trip time of a few tens of milliseconds (ms) for intracontinental transfers, almost 100 ms for transatlantic round trips, and about 250 ms between Europe and Australia. A small latency allows us to transfer a little bit of data quickly.
  • We also need a small jitter (variation of latency) for interactive applications such as games, remote desktop and live audio/video chat. Exactly how much is acceptable depends on the application, but less than 100 ms is considered a good target in gaming. Measuring jitter is discussed in detail in RFC 3393.
  • The jitter should be small also when our link is saturated by a large upload or download. In other words, we want no bufferbloat.
  • To transfer large amounts of data quickly, such as large file downloads, software updates or non-live video streaming, we also need a high bandwidth. Note that for technical reasons, some links are asymmetric, offering more bandwidth in one direction than the other, so we often measure download bandwidth separately from upload bandwidth. Again, actual requirements vary, but for example about 1-4 Mbps is needed to download standard definition video, 3-6 Mbps for 1080p HD and 25 Mbps for 4K UHD (YouTube, Netflix).
  • Finally, packet loss should be as small as possible. Wireless links in particular may suffer either from high packet loss or from large queuing delays and jitter (or both) in the presence of interference. High packet loss causes at least one other metric to be poor.

This is precisely what this tool measures.

You speak about bandwidth, but your tool measures throughput. What is throughput?

The throughput of a network connection is the rate at which it transmits useful data from a sender to a receiver. It is also called, informally, "network speed" and, incorrectly, "network bandwidth".

The correct definition of bandwidth is the maximum rate at which raw data can be transmitted over a channel. This includes transmissions of packets that are lost in transit (wasted bandwidth), and other overheads. Thus the network bandwidth is an upper limit for the network throughput: in ideal conditions, the throughput should approach the bandwidth, although it is usually a bit lower. If the throughput is around 90% of the bandwidth or better, the network is working well.

In plain English, bandwidth is what the network is capable of, throughput is what we actually get.

Both throughput and network bandwidth are measured in bits per second (note that bits are not the same as bytes, the typical unit for file sizes; 1 byte = 8 bits). In practice they are usually reported in Mbps (megabits per second, or millions of bits per second) or Gbps (gigabits per second, or billions of bits per second); 1 Gbps = 1,000 Mbps = 1,000,000,000 bps.

If you need to calculate the transfer time for a file, you should convert from Mbps to MB/s, which is very simple: divide by 8, so 10 Mbps = 1.25 MB/s.
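For example, here is a small back-of-the-envelope calculation (the file size and rates below are illustrative):

# Rough transfer-time estimate: convert a rate in Mbps (decimal units, as
# customary in networking) to MB/s, then divide the file size by it.
def transfer_time_seconds(file_size_mb: float, rate_mbps: float) -> float:
    """file_size_mb is in megabytes (10^6 bytes), rate_mbps in megabits per second."""
    rate_mb_per_s = rate_mbps / 8      # 8 bits per byte
    return file_size_mb / rate_mb_per_s

print(transfer_time_seconds(700, 10))    # a 700 MB file at 10 Mbps: 560 s
print(transfer_time_seconds(700, 100))   # the same file at 100 Mbps: 56 s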

Tell me more. What is the proper way to measure or calculate throughput?

There are multiple ways, depending on what exactly we want to measure. They all boil down to dividing some number of bits (or bytes) by a duration. What varies is which bits we are counting or (more rarely) which moments in time we are considering for measuring the duration.

The factors we need to take into account are:

At which layer in the network stack are we measuring throughput?

If we measure at the application layer, all that matters is what useful data we transmit to the other endpoint. For example, if we are transferring a file of 6 kB, the amount of data we count when measuring throughput is 6 kB (that is 6,000 bytes, not bits, and note the multiplier of 1000, not 1024; these conventions are common in networking).

This is usually called goodput and it may be different from what is actually sent at the transport layer (as in TCP or UDP), for two reasons:

1. Overhead due to headers

Each layer in the network adds a header to the data that introduces some overhead due to its transmission time. Moreover, the transport layer breaks the data into segments; this is because the network layer (as in IPv4 or IPv6) has a maximum packet size called MTU, typically 1,500 B in Ethernet networks. This value includes the network layer header size (e.g. the IPv4 header, which is variable in length but usually 20 B long) and the transport layer header (for TCP, it is also variable in length but usually 40 B long). This leads to a maximum segment size MSS (number of data bytes, without headers, in one segment) of 1500 - 40 - 20 = 1440 bytes.

Thus if we want to send 6 kB of application-layer data, we must break it into 5 segments, 4 of 1440 bytes each and one of 240 bytes. However, at the network layer we end up sending 5 packets, 4 of 1500 bytes each and one of 300 bytes, for a total of 6.3 kB.

Here I have not considered the fact that the link layer (as in Ethernet) adds its own header and also a trailer, which increases the overhead further. For Ethernet this is 8 bytes for the preamble, 14 bytes for the Ethernet header, optionally 4 bytes for a VLAN tag, a CRC of 4 bytes and an inter-frame gap of 12 bytes, for a total of at least 38 bytes per packet.

If we consider a fixed-rate link, say of 10 Mb/s, depending on what we measure we will get a different throughput. Normally we want one of these:

  • The goodput, i.e. application-layer throughput, if what we want to measure is application performance. For this example, we divide 6 kB by the transfer duration.
  • The link-layer throughput, if what we want to measure is network performance (bandwidth). For this example, we divide 6 kB + TCP overhead + IP overhead + Ethernet overhead = 6.3 kB + 5 × 38 B = 6490 B by the transfer duration.
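As a rough sketch of this arithmetic, here is the 6 kB example on the 10 Mb/s link (the transfer duration is taken as just the time needed to put all frames on the wire, ignoring latency):

# Worked example: split the payload into segments of at most MSS bytes,
# then add the per-packet overheads assumed in the text
# (20 B IP + 40 B TCP headers, plus 38 B of Ethernet framing per packet).
MSS = 1440            # payload bytes per segment
IP_TCP_HDR = 60       # IP + TCP header bytes per packet
ETH_OVERHEAD = 38     # link-layer framing bytes per packet
LINK_RATE = 10e6      # bits per second (the fixed-rate link of the example)

payload = 6000                                       # 6 kB of application data
packets = -(-payload // MSS)                         # ceiling division: 5 packets
network_bytes = payload + packets * IP_TCP_HDR       # 6300 B
link_bytes = network_bytes + packets * ETH_OVERHEAD  # 6490 B

duration = link_bytes * 8 / LINK_RATE                # ~5.19 ms on the wire
goodput_mbps = payload * 8 / duration / 1e6          # ~9.24 Mb/s
link_throughput_mbps = link_bytes * 8 / duration / 1e6   # 10 Mb/s by construction
print(packets, network_bytes, link_bytes)
print(round(goodput_mbps, 2), round(link_throughput_mbps, 2))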

2. Retransmission overheads

The Internet is a best-effort network, meaning that packets will be delivered if possible, but may also be dropped. In the case of TCP, packet drops are corrected by the transport layer. For UDP, there is no such mechanism, which means that either the application does not care if some parts of the data do not get delivered, or the application implements retransmission itself on top of UDP.

Retransmissions reduce goodput for two reasons:

a. Some data needs to be sent again, which takes time. This introduces a delay which is inversely proportional to the rate of the slowest link in the network between the sender and the receiver (a.k.a. the bottleneck link).

b. Detecting that some data was not delivered needs feedback from the receiver to the sender. Due to propagation delays (sometimes called latency; caused by the finite speed of light in the cable), feedback can only be received by the sender with some latency, which slows down the transmission even more. In most practical cases, this is the most significant contribution to the extra delay caused by the retransmission.

If we use UDP instead of TCP and we do not care about packet loss, we will of course get better throughput. But for many applications data loss cannot be tolerated, in which case such a measurement is meaningless.

There are some applications that do use UDP for transferring data. One is BitTorrent, which may use either TCP or a protocol they designed called uTP (Changing the game with μTP, BEP0029), which emulates TCP on top of UDP but aims at being more efficient with many parallel connections. Another transport protocol implemented over UDP is Google’s QUIC (ref, blog), which also emulates TCP, offers multiplexing of multiple parallel transfers over a single connection, and uses forward error correction to reduce retransmissions.

I will discuss forward error correction (slides) a little, since it is related to throughput. A naive way of implementing it is to send every packet twice: in case one copy gets lost, the other still has a chance of being received. This greatly reduces the need for retransmissions, but it also halves the goodput, since we send redundant data (note that the network bandwidth, or link-layer throughput, remains the same!). In some cases this is fine, especially if the latency is very large, such as on intercontinental or satellite links. Moreover, some mathematical methods exist where we don’t have to send a full copy of the data: for instance, for every n packets we send, we send one additional redundant packet which is the XOR (or some other arithmetic combination) of them. If the redundant packet gets lost, it doesn’t matter; if one of the n packets gets lost, we can reconstruct it from the redundant one and the other n-1. We can thus configure the overhead introduced by forward error correction to whatever amount of bandwidth we can spare: we trade off some throughput to reduce latency.
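As a minimal sketch of the XOR-parity idea just described (a toy example, not any particular protocol’s FEC scheme):

# Toy XOR-parity forward error correction: for every group of n equal-sized
# packets, send one extra packet that is the byte-wise XOR of the group. If
# exactly one packet of the group is lost, it can be rebuilt from the other
# n-1 plus the parity packet, with no retransmission (and no extra RTT).

def make_parity(packets):
    # All packets in a group must have the same length (pad if necessary).
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    # Rebuild a single missing packet (marked as None) from the parity.
    missing = [i for i, p in enumerate(received) if p is None]
    assert len(missing) <= 1, "XOR parity can only repair one loss per group"
    if missing:
        rebuilt = bytearray(parity)
        for pkt in received:
            if pkt is not None:
                for i, b in enumerate(pkt):
                    rebuilt[i] ^= b
        received[missing[0]] = bytes(rebuilt)
    return received

group = [b"pkt0", b"pkt1", b"pkt2"]               # n = 3 equal-sized packets
parity = make_parity(group)
print(recover([b"pkt0", None, b"pkt2"], parity))  # [b'pkt0', b'pkt1', b'pkt2']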

TCP traffic, including HTTP, does not use forward error correction. However WebRTC, which is used in HTML5 audio/video/peer connections, does.

How are we measuring the transfer time?

Is the transfer completed when the sender has finished sending the last bit over the wire, or does it also include the time it takes for the last bit to travel to the receiver? Additionally, does it include the time it takes to get a confirmation from the receiver, stating that all data has been received successfully and no retransmission is needed?

It really depends on what we want to measure. Note that for large transfers, one extra round-trip time is insignificant in most cases (unless we are communicating, for instance, with a probe on Mars), so it is usually included in the calculation.

Putting it all together, how does this test work?

The measurement method is described here, and the system architecture here. Note that I'm not affiliated with either of these projects. This website is a third-party frontend with, I hope, a friendlier interface.

What is the key feature in TCP that makes it have much, much higher throughput than UDP?

This is not true; it is a common misconception.

In addition to retransmitting data when needed, TCP will also adjust its sending rate so that it will not cause packet drops by congesting the network. The adjustment algorithm has been perfected over decades, and usually converges quickly to the maximum rate supported by the network (actually, the bottleneck link). For this reason it is usually difficult to beat TCP in throughput.

With UDP, there is no rate limiting at the sender. UDP lets the application send as much as it wants. But if we try to send more than the network can handle, some of the data will be dropped, lowering the throughput, and also making the admin of the network we are congesting very angry. This means that sending UDP traffic at high rates is impractical (unless the goal is to DoS a network).

Some media applications use UDP but rate-limit the transfer at the sender to a very small rate. This is typical of VoIP applications or Internet radio, which require very little throughput but low latency. I suppose this is one of the reasons for the misconception that UDP is slower than TCP; that is not the case: UDP can be as fast as the network allows.

As I said before, there are protocols such as uTP or QUIC, implemented over UDP, which achieve performance similar to TCP.

I heard that UDP is much faster than TCP?

See above.

Throughput calculation formula for TCP?

TCP throughput = TCP Window Size / RTT
            

Without packet loss (and retransmissions), this is correct.

TCP throughput = BDP / RTT = (Link Speed in Bytes/sec * RTT)/RTT = Link Speed in Bytes/sec
            

This is correct only if the window size is configured to the optimal value. The BDP (bandwidth-delay product, i.e. link speed multiplied by the RTT) divided by the RTT is the optimal (maximum possible) transfer rate in the network. Most modern operating systems should be able to auto-configure the window optimally.

From the formula, it seems that throughput is limited by the TCP window size. Is this true?

If the TCP window size is smaller than the BDP, then the throughput will indeed be suboptimal (because we waste time waiting for ACKs instead of sending more data). If it is equal to or larger than the BDP, then we achieve optimal throughput. But modern operating systems increase the window size automatically so that it does not limit throughput, so in practice this should not happen.
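Here is a small worked example with illustrative numbers:

# Bandwidth-delay product: the amount of data that must be "in flight" to
# keep a link busy. If the TCP window is smaller, throughput is capped at
# window / RTT. The link rate and RTT below are illustrative.
link_rate = 100e6     # bits per second (a 100 Mb/s link)
rtt = 0.080           # seconds (80 ms round-trip time)

bdp_bytes = link_rate * rtt / 8
print(bdp_bytes)                   # 1,000,000 B: we need a window of about 1 MB

window = 64 * 1024                 # a too-small 64 KiB window
throughput = window * 8 / rtt      # bits per second actually achieved
print(throughput / 1e6)            # ~6.55 Mb/s out of the available 100 Mb/s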

Is throughput limited by the latency (or RTT)?

The answer is that…it’s complicated.

Normally, no matter how large the RTT is, we can increase the window size until the throughput approaches the bandwidth. However, there are situations in which a large RTT is still limiting throughput.

One scenario is when the amount of data to transfer is very small. In this case a large window does not help, since we don’t have enough data to fully utilize it. Unfortunately, this is quite common for Web traffic, where most files (such as styles or JavaScript) are small. Thus a large RTT hurts performance, increasing website loading times. On the other hand, transferring large files or video (but maybe not real-time chat!) should still have good performance.

Another ill effect occurs when a new TCP connection is created: TCP always starts with a handshake followed by a small initial window in slow start. With a large RTT, the early phase of the transfer is slow; the effect is worse on HTTPS (SSL) connections. While there have been efforts to address this problem (TCP Fast Open, SPDY, HTTP/2), unless we invent faster-than-light communication, it will never be solved completely.
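To get a feel for this effect, here is a rough sketch of how many round trips a short transfer needs during slow start. It ignores the handshake, congestion avoidance and any bandwidth limit, and the initial window of 10 segments and the MSS of 1440 bytes are assumptions of mine for illustration:

# Rough model: in slow start the congestion window roughly doubles every
# RTT, so a small file needs several RTTs no matter how fast the link is.
import math

def slow_start_rtts(file_bytes, mss=1440, initial_window=10):
    segments = math.ceil(file_bytes / mss)
    window, sent, rtts = initial_window, 0, 0
    while sent < segments:
        sent += window      # one window's worth of segments per round trip
        window *= 2         # the window doubles each RTT during slow start
        rtts += 1
    return rtts

for size in (50_000, 500_000, 5_000_000):
    n = slow_start_rtts(size)
    print(size, "bytes:", n, "RTTs,", f"{n * 0.2:.1f} s at an RTT of 200 ms")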

Finally, a more complex problematic scenario involves multiple TCP flows sharing the same bottleneck link. While parallel TCP flows normally share bandwidth with each other approximately equally, which is called fairness, a TCP flow with a higher RTT than the others will end up with a smaller throughput (ref1, ref2).

This is a tool that tests your microphone and webcam. The audio and video are not sent over the network.

Once you click 'Start', a webcam and microphone capture should appear below, with both of them played back to you. If you don't see anything, your webcam or microphone is probably not configured correctly.

This is a tool that, given an IP address, searches for:

  • the name of the ISP the IP address is registered to (WHOIS lookup);
  • its geographical location (geolocation lookup).
It uses the RADb and the MaxMind GeoLite2 databases. Note that WHOIS and geolocation information is sometimes inaccurate.
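For reference, a lookup like this can be sketched roughly as follows. The code below illustrates the general approach (a plain WHOIS query to RADb, plus local copies of the GeoLite2 databases read with the geoip2 Python package); it is not the code this site actually runs, and the database file names are assumptions:

import socket
import geoip2.database

def radb_whois(ip):
    # WHOIS (RFC 3912) is just a TCP connection to port 43: send the query
    # followed by CRLF, then read until the server closes the connection.
    with socket.create_connection(("whois.radb.net", 43), timeout=10) as s:
        s.sendall((ip + "\r\n").encode())
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

ip = "8.8.8.8"
print(radb_whois(ip))

# The GeoLite2 .mmdb files must be downloaded from MaxMind beforehand.
with geoip2.database.Reader("GeoLite2-ASN.mmdb") as asn_db:
    r = asn_db.asn(ip)
    print(r.autonomous_system_number, r.autonomous_system_organization)

with geoip2.database.Reader("GeoLite2-City.mmdb") as city_db:
    r = city_db.city(ip)
    print(r.country.iso_code, r.country.name, r.city.name)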

The results include the IP address, the network and its description, the AS number, AS name and AS description, the country code and country, and a map of the location.

Privacy Policy

This is a collection of tools created by researchers to gather information relevant to the performance of Internet connection services.

The goals of the project are:

  • Providing users with hopefully useful network performance measurement tools;
  • Collecting and analyzing information regarding Internet connection services;
  • Reporting collected information to the public.

By accessing the tools, you will generate and send some data back-and-forth with one or more measurement servers. The servers collect data related to the particular communication flows generated by the tests between the client and the servers. This data includes your computer's IP address and the date and time when the test was executed. This data will be used and analyzed by researchers and, in order to advance research, it will be made publicly available. The tools do not collect personal information, such as your Internet traffic, your name, your emails or Web searches.

We may also automatically collect the following information about your use of our site or services through cookies, web beacons, and other technologies: your domain name; your browser type and operating system; the web pages on this website you view; the links on this website you click; your IP address; the length of time you visit our site and/or use our services; and the referring URL, i.e. the webpage that led you to our site.

Cookies

We use cookies and other tracking mechanisms (such as local storage) to track information about your use of our site or services. These allow us to enhance user experience, for example by displaying a history of your previous performance test results.

Third parties

This website uses software, services and infrastructure from the following third parties, which may use cookies and other tracking mechanisms and may collect information about you:

M-Lab

M-Lab is an open, distributed server platform on which researchers can deploy open source Internet measurement tools. The data collected by those tools is released in the public domain. The goal of M-Lab is to advance network research and empower the public with useful information about their broadband and mobile connections. By enhancing Internet transparency, M-Lab helps sustain a healthy, innovative Internet.

Several tools access the M-Lab servers, which collect data. Please read carefully the M-Lab privacy policy. You can also find out exactly what data is collected and how to access it here.

Google Analytics

We use Google Analytics to evaluate usage of our site and help us improve our services, performance and user experiences. You may find out more about Google Analytics here.

Terms of Use

By using these services in any manner, including but not limited to visiting, browsing and using the various tools, you agree to these Terms of Use and all other operating rules, policies and procedures that may be published from time to time on the services, each of which is incorporated by reference and each of which may be updated from time to time without notice to you.

We reserve the right, at our sole discretion, to modify or replace any of these Terms of Use, or change, suspend, or discontinue the services (including without limitation, the availability of any feature, database, or content) at any time. We may also impose limits on certain features of the services or restrict your access to parts or all of the services without notice or liability. It is your responsibility to check these Terms of Use periodically for changes. Your use of the services following the posting of any changes to these Terms of Use constitutes acceptance of those changes.

These services are provided by the authors and contributors "as is" and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose, are disclaimed. In no event shall the authors or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage.

Any part of the data measured or reported by the tools may be inaccurate.

Users are not allowed to use automated scripts of any kind to access this website or services, without our express permission.

Users are not allowed to deploy data-mining, web-mirroring or web-copying of any kind against the site. Any such effort will be considered an attempt to duplicate copyrighted data, and/or to destabilize the operation of the website. If you want to obtain the collected data, you may use the M-Lab open-access datasets instead. If you want to obtain the client or server-side source code, you may download it from our git repository or contact us.

Portions of the software are based on the code of the Web100 Network Diagnostic Tool (NDT); the software, with our modifications, is available in source code form in this git repository under the original NDT license.

Contact

You may contact us at contact at netperf.tools