UA Tracker

Tracking user-agents across the web.


Frequently Asked Questions

What is it?
What's it for?
Why do you want to do that?
Where is this text file you speak of?
How does it work?
Sounds shady. What other data do you collect?
Let me rephrase the question. What data don't you collect?
How can I help?


What is it?

UA Tracker is a user-agent tracking program.


What's it for?

UA Tracker logs the user-agent from a visitor's browser to a database, counts the number of times it has seen that unique user-agent, and generates a plain-text file containing all the unique user-agents in the database.

I recently updated the site to evaluate certain types of hits to help determine whether the visitor is a web crawler or a bot. Those results are stored in a different database.


Why do you want to do that?

Curiosity, mostly. I spent many hours searching the web for statistical data about user-agents. I found a few webpages containing tables of unique user-agents, but nothing that tracks the data in real time. I wanted a plain-text file with a simple list of user-agents, and I didn't want to write a program to parse HTML tables. Since I was unable to find a simple plain-text list, I thought it would be fun to create a webpage to track the statistics in real time and generate my own text file.


Where is this text file you speak of?

You can download it here.


How does it work?

When a browser connects to a web server, it sends HTTP request headers to the server. Among these is the User-Agent header, a string that identifies the browser to the web server. Webmasters often use it to tailor their web applications to the browser type.
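For illustration, a request from a browser of that era might carry headers like the following (the exact values vary by browser and platform; this example is made up):

    GET / HTTP/1.1
    Host: www.ua-tracker.com
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0
    Accept: text/html

The User-Agent line is the string UA Tracker records.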

When a browser loads the UA Tracker image, it must connect to UA Tracker's host server. The UA Tracker software then collects the user-agent string from the HTTP request headers and checks the database for that unique string. If it does not exist in the database, the user-agent string is added to the database and the text file.
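The server-side logic described above can be sketched roughly as follows. This is not the actual UA Tracker code; the table name, column names, and text-file name are hypothetical, and SQLite stands in for whatever database the site actually uses:

```python
import sqlite3

def record_user_agent(conn, ua, textfile="user_agents.txt"):
    """Log a user-agent string: count repeat sightings, and append
    first-time strings to the plain-text list."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS agents (ua TEXT PRIMARY KEY, hits INTEGER)"
    )
    row = conn.execute("SELECT hits FROM agents WHERE ua = ?", (ua,)).fetchone()
    if row is None:
        # First time this exact string has been seen: store it in the
        # database and append it to the plain-text file.
        conn.execute("INSERT INTO agents (ua, hits) VALUES (?, 1)", (ua,))
        with open(textfile, "a") as f:
            f.write(ua + "\n")
    else:
        # Already known: just bump the hit counter.
        conn.execute("UPDATE agents SET hits = hits + 1 WHERE ua = ?", (ua,))
    conn.commit()
```

Keying the table on the full user-agent string keeps the lookup simple and makes "unique user-agent" the natural unit of counting.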


Sounds shady. What other data do you collect?

I collect user-agents and referrers. Referrers are validated to ensure the referring website actually has a linkback. All other referrers are discarded. User-agents can be viewed here, and referrers can be viewed here.
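The linkback check might work along these lines (the real validation code is not published, so this is an assumption): fetch the referring page and accept the referrer only if its HTML actually contains a link back to ua-tracker.com.

```python
import re

def has_linkback(html):
    """Return True if the given page HTML contains an anchor linking
    back to ua-tracker.com. A naive sketch, not the site's actual check."""
    pattern = r'<a\s[^>]*href=["\']https?://(www\.)?ua-tracker\.com'
    return re.search(pattern, html, re.IGNORECASE) is not None
```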

With the addition of web crawler and bot tracking, I started collecting IP addresses. The IP addresses are only collected for web crawlers and bots to cross-reference with reverse DNS data for validation. Validating the IP addresses for web crawlers and bots allows me to remove strings that are improperly flagged.
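The usual way to do this cross-referencing is forward-confirmed reverse DNS: resolve the IP to a hostname, check that the hostname belongs to the crawler's domain (e.g. Googlebot hostnames end in ".googlebot.com"), then resolve the hostname back and confirm it matches the original IP. A sketch, with the lookup functions as parameters so it can be exercised without live DNS (by default the standard socket resolvers are used):

```python
import socket

def is_valid_crawler(ip, allowed_suffixes,
                     reverse=lambda ip: socket.gethostbyaddr(ip)[0],
                     forward=lambda host: socket.gethostbyname(host)):
    """Forward-confirmed reverse DNS check for a claimed crawler IP."""
    try:
        host = reverse(ip)                      # IP -> hostname
    except OSError:
        return False
    if not host.endswith(tuple(allowed_suffixes)):
        return False                            # hostname not in crawler's domain
    try:
        return forward(host) == ip              # hostname must resolve back to the IP
    except OSError:
        return False
```

A string that claims to be a known crawler but fails this check can be unflagged.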


Let me rephrase the question. What data don't you collect?

I don't collect IP addresses for standard web browsers, hostnames, connection types, languages, encoding, character sets, or cookies.


How can I help?

Easy! I have developed a script to track the user-agent from a simple icon. If you have a webpage, and you would like to contribute to the UA Tracker project, you can paste the following code into any webpage:

<a href="http://www.ua-tracker.com" target="_blank"><img src="http://www.ua-tracker.com/image.png" alt="UA Tracker - Tracking user-agents across the web." title="UA Tracker - Tracking user-agents across the web." /></a>


Created by Scott J. LeCompte
Copyright © 2007 UA Tracker All rights reserved.