Overview

This script is designed to help you test the performance of your proxies by measuring latency and tracking the success rate of requests. It supports SOCKS5 and HTTP/HTTPS proxies and uses ip-api.com as the default test target. You can also test other endpoints, such as Cloudflare’s trace URL, to benchmark proxy performance.


What You’ll Learn

  • How to use the script to test latency and success rate across multiple proxy requests.
  • How to configure the proxy protocol (SOCKS5 or HTTP).
  • How to run the script using rotating or sticky ports.
  • How to analyze the test results, including latency statistics and status code breakdown.

Default Settings

  • Protocol: HTTP/HTTPS by default; set use_socks5 = True in the script to test SOCKS5.
  • Target: ip-api.com – our baseline for IP information.
  • Concurrency: 5-10 threads (adjustable in the script)
  • Testing Volume: At least 2000 requests for accuracy

Variable Setting

The script allows you to set the following variables (an example settings block follows this list):

  • Target Country: User-defined; leave blank (do not append -country-) for a random pool
  • Protocol: Choose between HTTP, HTTPS, or SOCKS5 (default is HTTP/HTTPS)
  • Session Type: Rotating (default)
  • Target URL: The URL to test against (default is set to ip-api.com)
  • Number of Requests: Adjust for your testing needs
    Recommendation: For accurate results, run at least 2000 requests and use 5-10 threads.
The test results can vary depending on the gateway you choose as the proxy address. We currently offer three gateway locations.
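For illustration, these options typically map to a few variables near the top of the script, along the lines of the sketch below (target_country is a placeholder name; the released script may name or structure these settings differently):

# Illustrative settings block; variable names are placeholders
target_country = ''            # e.g. 'US'; leave blank for a random pool
use_socks5 = False             # False = HTTP/HTTPS, True = SOCKS5
url = 'http://ip-api.com'      # Target URL to test against
num_requests = 2000            # At least 2000 requests recommended for accuracy
num_workers = 5                # 5-10 concurrent threads recommended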

Port Ranges

  • Rotating Ports: Rotating ports change your IP on every request sent (see the example proxy settings after this list).

    • HTTP/HTTPS: 9000 - 9010
    • SOCKS5: 11000 - 11010
  • Sticky Ports: Sticky ports enable you to maintain the same IP for any duration that you assign on the dashboard.

    • HTTP/HTTPS: 10000 - 10900
    • SOCKS5: 12000 - 12010
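For example, switching between a rotating and a sticky session is just a matter of using a port from the matching range. The credentials below are placeholders and the dictionaries are only a sketch:

# Rotating HTTP/HTTPS session (ports 9000-9010): new IP on every request
rotating_proxies = {
    'http': 'http://username:password@proxy.geonode.io:9000',
    'https': 'http://username:password@proxy.geonode.io:9000'
}

# Sticky HTTP/HTTPS session (ports 10000-10900): keeps the same IP for the
# duration configured on the dashboard
sticky_proxies = {
    'http': 'http://username:password@proxy.geonode.io:10000',
    'https': 'http://username:password@proxy.geonode.io:10000'
}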

Steps to Test the Latency and Success Rate of Your Proxies

Step 1: Installing Required Libraries

Before running the script, you need to install the required Python libraries. Run the following command to install them:

pip install requests matplotlib numpy
  • requests: To send HTTP requests. (If you plan to test SOCKS5 proxies, also install the SOCKS extra: pip install requests[socks].)
  • matplotlib: To plot the test results.
  • numpy: To calculate statistical values.
  • collections: To count occurrences of status codes and error messages (part of the Python standard library, so no installation is needed).

Step 2: Configuring Proxy Settings

The script supports both SOCKS5 and HTTP/HTTPS proxies. You can choose which one to use by setting the use_socks5 flag to True or False.

SOCKS5 Proxy Example:

If you need to use a SOCKS5 proxy, set use_socks5 = True:

use_socks5 = True

# SOCKS5 Proxy Settings
proxies = {
    'http': 'socks5://username:password@proxy.geonode.io:11009',
    'https': 'socks5://username:password@proxy.geonode.io:11009'
}

HTTP Proxy Example:

For HTTP proxies, set use_socks5 = False:

use_socks5 = False

# HTTP Proxy Settings
proxies = {
    'http': 'http://username:password@proxy.geonode.io:9008',
    'https': 'http://username:password@proxy.geonode.io:9008'
}

Make sure to replace username and password with your actual credentials.
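If you want to keep both configurations side by side, a small selector driven by the use_socks5 flag can build the proxies dictionary. This is a sketch rather than the exact structure of the released script:

use_socks5 = True  # False for HTTP/HTTPS

if use_socks5:
    proxy_url = 'socks5://username:password@proxy.geonode.io:11009'  # rotating SOCKS5 port
else:
    proxy_url = 'http://username:password@proxy.geonode.io:9008'     # rotating HTTP/HTTPS port

proxies = {'http': proxy_url, 'https': proxy_url}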


Step 3: Configuring Test Parameters

The script allows you to configure various parameters for the proxy testing. You can change the target country, target URL, number of requests, and other settings.

Customizing Test Settings:

url = 'http://ip-api.com'  # Default test target; can be replaced
num_requests = 5000        # Number of requests to send
num_workers = 5            # Number of concurrent threads

For accurate results, it’s recommended to run at least 2000 requests using 5-10 threads. You can adjust the number of requests or workers based on your needs.


Step 4: Sending Concurrent Requests

The script uses the ThreadPoolExecutor to send requests concurrently. Each request is timed, and the status code and latency are recorded.

import time
import requests
from concurrent.futures import ThreadPoolExecutor

def fetch_url(i):
    try:
        start_time = time.time()  # Start the timer
        response = requests.get(url, proxies=proxies, timeout=60)  # Send the request through the proxy
        latency = time.time() - start_time  # Calculate latency
        return response.status_code, latency
    except requests.exceptions.Timeout:
        return 'Timeout', time.time() - start_time  # Request exceeded the 60-second timeout
    except requests.exceptions.RequestException:
        return 'Error', time.time() - start_time  # Any other connection or HTTP error

The function fetch_url(i) handles sending each request and logs either a successful status code or an error (timeout or other exceptions).
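A minimal driver for collecting the results might look like the sketch below; the results list and loop are illustrative, and the released script may organize them differently:

results = []

# Send all requests concurrently and collect (status, latency) tuples
with ThreadPoolExecutor(max_workers=num_workers) as executor:
    for status, latency in executor.map(fetch_url, range(num_requests)):
        results.append((status, latency))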


Step 5: Analyzing and Visualizing Results

After the requests have been completed, the script processes the results and visualizes them in a graph. The graph shows latency across all requests, with color-coded markers for different status codes (200, timeout, error).

Graph Example:

The following graph is an example of the output you will receive after running the script:

  • Blue markers: Successful requests (Status 200).
  • Red markers: Requests with non-200 status codes (e.g., 404, 500).
  • Green markers: Requests that timed out.
  • Magenta markers: Requests with other errors.

The graph provides a visual overview of the performance and reliability of the proxy.
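As a rough sketch, assuming the (status, latency) tuples collected in Step 4, the color-coded plot can be produced with matplotlib along these lines (the released script’s styling may differ):

import matplotlib.pyplot as plt

colors = []
for status, _ in results:
    if status == 200:
        colors.append('blue')      # Successful requests
    elif status == 'Timeout':
        colors.append('green')     # Timed-out requests
    elif status == 'Error':
        colors.append('magenta')   # Other errors
    else:
        colors.append('red')       # Non-200 status codes

latencies = [latency for _, latency in results]
plt.scatter(range(len(results)), latencies, c=colors, s=10)
plt.xlabel('Request number')
plt.ylabel('Latency (seconds)')
plt.title('Proxy latency per request')
plt.show()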


Step 6: Displaying Test Results

The script calculates statistical data about the test results, such as the average latency, median latency, and standard deviation. It also counts the number of successful requests (Status 200) and logs any errors or timeouts.
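A sketch of how these statistics can be derived from the collected results with numpy and collections.Counter (variable names are illustrative):

import numpy as np
from collections import Counter

latencies = [latency for _, latency in results]
status_counts = Counter(status for status, _ in results)

print(f"Average Latency: {np.mean(latencies):.2f} seconds")
print(f"Median Latency: {np.median(latencies):.2f} seconds")
print(f"Standard Deviation of Latency: {np.std(latencies):.2f} seconds")

print("\nStatus Code Percentages:")
for status, count in status_counts.items():
    print(f"{status}: {count / len(results) * 100:.2f}%")

print(f"\nTotal Requests: {len(results)}")
print(f"Successful (Status 200): {status_counts[200]}")
print(f"Timeouts: {status_counts['Timeout']}")
print(f"Other Errors: {status_counts['Error']}")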

Example Output:

Here is an example of the statistics that will be displayed after the test:

Average Latency: 1.20 seconds
Median Latency: 0.89 seconds
Standard Deviation of Latency: 1.30 seconds

Status Code Percentages:
200: 99.54%
Error: 0.12%
401: 0.04%
402: 0.06%
500: 0.20%
502: 0.04%

Total Requests: 5000
Successful (Status 200): 4977
Timeouts: 0
Other Errors: 6

Error Messages:
Error: 6

These results provide insight into the proxy’s performance, including success rates, latency, and any issues that occurred during testing.


Step 7: Interpreting the Results

Based on the output, you can analyze the performance of the proxy:

  • Success Rate: The percentage of successful requests (Status 200) versus failed requests.
  • Latency: The average time taken for requests to complete.
  • Error Distribution: The breakdown of errors, such as timeouts or other issues.

The rotating port configuration helps provide more accurate results by testing a range of IP addresses rather than relying on a single proxy IP.


Source Code

You can find the full source code for this script on GitHub at the following link: https://github.com/geonodecom/proxy-testing-toolkit/tree/performance-testing/success-latency


FAQs