URL normalization is the process by which URLs are modified and standardized. The goal of the normalization process is to transform a URL into a normalized URL so it is possible to determine if two syntactically different URLs may be equivalent.
Search engines employ normalization in order to assign importance to web pages and to reduce the indexing of duplicate pages. Web crawlers perform URL normalization in order to avoid crawling the same resource more than once. Web browsers may perform normalization to determine whether a link has been visited or whether a page has been cached.
There are several types of normalization that can be performed. Some of them preserve semantics and some do not.
Normalizations that preserve semantics
The following normalizations are described in RFC 3986 to result in equivalent URLs:
- Converting the scheme and host to lower case. The scheme and host components of the URL are case-insensitive. Most normalizers will convert them to lowercase. Example: HTTP://www.Example.com/ → http://www.example.com/
- Capitalizing letters in escape sequences. All letters within a percent-encoding triplet (e.g., “%3A”) are case-insensitive and should be capitalized. Example: http://www.example.com/a%c2%b1b → http://www.example.com/a%C2%B1b
- Decoding percent-encoded triplets of unreserved characters. For consistency, percent-encoded triplets in the ranges of ALPHA (%41–%5A and %61–%7A), DIGIT (%30–%39), hyphen (%2D), period (%2E), underscore (%5F), or tilde (%7E) should not be created by URI producers and, when found in a URI, should be decoded to their corresponding unreserved characters by URI normalizers. Example: http://www.example.com/%7Eusername/ → http://www.example.com/~username/
- Removing the default port. The default port (port 80 for the “http” scheme) can be removed from (or added to) a URL. Example: http://www.example.com:80/ → http://www.example.com/
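As a sketch, the semantics-preserving rules above can be combined in Python using only the standard library. The function and constant names here are illustrative, and userinfo handling is omitted for brevity:

```python
import re
from urllib.parse import urlsplit, urlunsplit

DEFAULT_PORTS = {"http": 80, "https": 443}
UNRESERVED = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
              "abcdefghijklmnopqrstuvwxyz"
              "0123456789-._~")

def _fix_triplet(match):
    # Decode the triplet if it encodes an unreserved character;
    # otherwise just capitalize its hex digits.
    char = chr(int(match.group(1), 16))
    return char if char in UNRESERVED else "%" + match.group(1).upper()

def normalize(url):
    parts = urlsplit(url)
    # Lowercase the scheme and host.
    scheme = parts.scheme.lower()
    host = (parts.hostname or "").lower()
    # Drop the port when it is the scheme's default.
    port = parts.port
    netloc = host if port in (None, DEFAULT_PORTS.get(scheme)) else f"{host}:{port}"
    # Fix percent-encoded triplets in the path.
    path = re.sub(r"%([0-9A-Fa-f]{2})", _fix_triplet, parts.path)
    return urlunsplit((scheme, netloc, path, parts.query, parts.fragment))
```

For instance, `normalize("HTTP://www.Example.com:80/%7euser")` yields `http://www.example.com/~user`, applying all three path and authority rules at once.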
Normalizations that usually preserve semantics
For http and https URLs, the following normalizations listed in RFC 3986 may result in equivalent URLs, but are not guaranteed to by the standard:
- Adding a trailing “/” to a non-empty path. Directories are indicated with a trailing slash and should be included in URLs. Example: http://www.example.com/alice → http://www.example.com/alice/
- However, there is no way to know if a URL path component represents a directory or not. RFC 3986 notes that if the latter URL becomes identical to the former URL, then that is an indication that they are equivalent.
- Removing dot-segments. The segments “..” and “.” can be removed from a URL according to the algorithm described in RFC 3986 (or a similar algorithm). Example: http://www.example.com/../a/b/../c/./d.html → http://www.example.com/a/c/d.html
- However, if a removed “..” component, e.g. “b/..”, is a symlink to a directory with a different parent, eliding “b/..” will result in a different path and URL. In rare cases depending on the web server, this may even be true for the root directory (e.g. “//www.example.com/..” may not be equivalent to “//www.example.com/”).
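The dot-segment removal step can be sketched as a simplified version of the RFC 3986, Section 5.2.4 algorithm. This helper is illustrative, assumes an absolute path, and is not a full RFC implementation:

```python
def remove_dot_segments(path: str) -> str:
    """Remove "." and ".." segments from an absolute path ("/...")."""
    segments = path.split("/")
    output = []
    for seg in segments[1:]:
        if seg == ".":
            continue            # "." refers to the current segment; drop it
        if seg == "..":
            if output:
                output.pop()    # ".." removes the previous segment, if any
            continue
        output.append(seg)
    result = "/" + "/".join(output)
    # A trailing "." or ".." leaves a trailing slash in the RFC algorithm.
    if segments[-1] in (".", "..") and not result.endswith("/"):
        result += "/"
    return result
```

For example, `remove_dot_segments("/../a/b/../c/./d.html")` yields `/a/c/d.html`, matching the example above; note that “..” at the root is simply discarded.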
Normalizations that change semantics
Applying the following normalizations results in a semantically different URL, although the two URLs may still refer to the same resource:
- Removing the directory index. Default directory indexes are generally not needed in URLs. Examples: http://www.example.com/default.asp → http://www.example.com/ and http://www.example.com/a/index.html → http://www.example.com/a/
- Removing the fragment. The fragment component of a URL is never seen by the server and can sometimes be removed. Example: http://www.example.com/bar.html#section1 → http://www.example.com/bar.html
- However, AJAX applications frequently use the value in the fragment.
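In Python, the standard library already provides this operation via urllib.parse.urldefrag:

```python
from urllib.parse import urldefrag

# urldefrag splits a URL into the URL without its fragment and the fragment.
url, fragment = urldefrag("http://www.example.com/bar.html#section1")
# url == "http://www.example.com/bar.html", fragment == "section1"
```

A crawler would keep only `url`; an AJAX-aware client might need to preserve `fragment`.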
- Replacing an IP address with its domain name. Check if the IP address maps to a domain name. Example: http://208.77.188.166/ → http://www.example.com/
- The reverse replacement is rarely safe due to virtual web servers.
- Limiting protocols. Limiting different application layer protocols. For example, the “https” scheme could be replaced with “http”. Example: https://www.example.com/ → http://www.example.com/
- Removing duplicate slashes. Paths which include two adjacent slashes could be converted to one. Example: http://www.example.com/foo//bar.html → http://www.example.com/foo/bar.html
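A sketch of this rule in Python, collapsing duplicate slashes in the path component only, so that the “//” after the scheme is untouched (the helper name is illustrative):

```python
import re
from urllib.parse import urlsplit, urlunsplit

def collapse_slashes(url: str) -> str:
    # Operate on the path only; SplitResult is a namedtuple, so _replace works.
    parts = urlsplit(url)
    return urlunsplit(parts._replace(path=re.sub(r"/{2,}", "/", parts.path)))
```

Working on the parsed path rather than on the raw string avoids corrupting the scheme separator or any “//” inside the query.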
- Removing or adding “www” as the first domain label. Some websites operate identically from two Internet domains: one whose least significant label is “www” and another whose name is the result of omitting the least significant label from the name of the first, the latter being known as a naked domain. For example, http://example.com/ and http://www.example.com/ may access the same website. Many websites redirect the user from the www to the non-www address or vice versa. A normalizer may determine if one of these URLs redirects to the other and normalize all URLs appropriately. Example: http://www.example.com/ → http://example.com/
- Sorting the query parameters. Some web pages use more than one query parameter in the URL. A normalizer can sort the parameters into alphabetical order (with their values), and reassemble the URL. Example: http://www.example.com/display?lang=en&article=fred → http://www.example.com/display?article=fred&lang=en
- However, the order of parameters in a URL may be significant, and a web server may allow the same variable to appear multiple times.
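A sketch of parameter sorting with the Python standard library (the helper name is illustrative; note that re-encoding the pairs may change the percent-encoding of some values):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def sort_query(url: str) -> str:
    parts = urlsplit(url)
    # keep_blank_values preserves parameters like "id=" that have no value.
    pairs = sorted(parse_qsl(parts.query, keep_blank_values=True))
    return urlunsplit(parts._replace(query=urlencode(pairs)))
```

Sorting key-value pairs (rather than raw `&`-separated strings) keeps repeated variables intact, though their relative order may still change.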
- Removing unused query variables. A page may only expect certain parameters to appear in the query; unused parameters can be removed. Example: http://www.example.com/display?id=123&fakefoo=fakebar → http://www.example.com/display?id=123
- Note that a parameter without a value is not necessarily an unused parameter.
- Removing default query parameters. A default value in the query string may render identically whether it is present or not. Example: http://www.example.com/display?id=&sort=ascending → http://www.example.com/display
- Removing the “?” when the query is empty. When the query is empty, there may be no need for the “?”. Example: http://www.example.com/display? → http://www.example.com/display
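The last two query rules can be sketched together: filter the query against a whitelist of expected parameters, and let an empty result drop the bare “?”. The KNOWN_PARAMS whitelist below is hypothetical, for illustration only:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

KNOWN_PARAMS = {"id", "sort"}  # hypothetical whitelist for an example page

def strip_query(url: str) -> str:
    parts = urlsplit(url)
    pairs = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if k in KNOWN_PARAMS]
    # urlencode returns "" for an empty list, and urlunsplit then omits the "?".
    return urlunsplit(parts._replace(query=urlencode(pairs)))
```

As the caveat above notes, a parameter without a value is not necessarily unused, so a whitelist like this has to be built per site.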
Normalization based on URL lists
Some normalization rules may be developed for specific websites by examining URL lists. For example, if two syntactically different URLs repeatedly appear together in a crawl log and return the same content, we may assume that the two URLs are equivalent and can be normalized to one of the URL forms.
Schonfeld et al. (2006) present a heuristic called DustBuster for detecting DUST (different URLs with similar text) rules that can be applied to URL lists. They showed that once the correct DUST rules were found and applied with a normalization algorithm, they were able to find up to 68% of the redundant URLs in a URL list.
See also
- Uniform Resource Locator
- Fragment identifier
- Web crawler
References
- RFC 3986, Section 6: Normalization and Comparison
- RFC 3986, Section 2.3: Unreserved Characters
- “Secure Coding in C and C++” (PDF). Securecoding.cert.org. Retrieved 2013-08-24.
- “jQuery 1.4 $.param demystified”. Ben Alman. 2009-12-20. Retrieved 2013-08-24.