
Saturday, August 04, 2007


FOREX Glossary

Here are some of the most common terms used in FOREX trading.

Ask Price – Sometimes called the Offer Price, this is the market price at which traders can buy currencies. The Ask Price is shown on the right side of a quote – e.g. in EUR/USD 1.1965 / 68, one euro can be bought for 1.1968 US dollars.

Bar Chart – A type of chart used in Technical Analysis. Each time division on the chart is displayed as a vertical bar which shows the following information – the top of the bar is the high price, the bottom of the bar is the low price, the horizontal line on the left of the bar shows the opening price, and the horizontal line on the right of the bar shows the closing price.

Base Currency – is the first currency in a currency pair. A quote shows how much the base currency is worth in the quote (second) currency. For example, in the quote - USD/JPY 112.13 – US dollars are the base currency, with 1 US dollar being worth 112.13 Japanese yen.

Bid Price – The price at which a trader can sell currencies. The Bid Price is shown on the left side of a quote – e.g. in EUR/USD 1.1965 / 68, one euro can be sold for 1.1965 US dollars.

Bid/Ask Spread – is the difference between the bid price and the ask price in any currency quotation. The spread represents the broker's fee, and varies from broker to broker.
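
For the example quote above (EUR/USD 1.1965 / 68), the spread can be worked out in a couple of lines of Python; this is only an illustrative sketch, with 0.0001 assumed as the pip size for a four-decimal quote:

    # EUR/USD quoted 1.1965 / 1.1968 (bid / ask)
    bid, ask = 1.1965, 1.1968
    pip = 0.0001                      # pip size for a four-decimal quote
    spread_in_pips = (ask - bid) / pip
    print(round(spread_in_pips))      # -> 3, i.e. a 3-pip spread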

Broker – the intermediary between buyer and seller. Most FOREX brokers are associated with large financial institutions and earn money by setting a spread between bid and ask prices.

Candlestick Chart – A type of chart used in Technical Analysis. Each time division on the chart is displayed as a candlestick – a red or green vertical bar with extensions above and below the candlestick body. The top of the extension shows the highest price for the chart division and the bottom of the extension shows the lowest price. Red candlesticks indicate a closing price lower than the opening price, and green candlesticks indicate a closing price higher than the opening price.

Cross Currency – A currency pair that does not include US dollars – e.g. EUR/GBP.

Currency Pair – Two currencies involved in a FOREX transaction – e.g. EUR/USD.

Economic Indicator – A statistical report issued by governments or academic institutions indicating economic conditions within a country.

First In First Out (FIFO) – refers to the order in which open positions are liquidated: the first positions to be liquidated are the first that were opened.

Foreign Exchange (FOREX, FX) – Simultaneously buying one currency and selling another.

Fundamental Analysis – Analysis of political and economic conditions that can affect currency prices.

Leverage or Margin – The ratio of the value of a transaction to the required deposit. A common leverage ratio in FOREX trading is 100:1 – you can trade currency worth 100 times the amount of your deposit.
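
As a worked example with hypothetical figures, 100:1 leverage on a 1,000 US dollar deposit controls a 100,000 US dollar position:

    # Hypothetical account: 100:1 leverage on a 1,000 USD margin deposit
    deposit = 1000         # margin in USD
    leverage = 100         # 100:1
    position = deposit * leverage
    print(position)        # -> 100000 USD of currency controlled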

Limit Order – An order to buy or sell when the price reaches a specified level.

Lot – The size of a FOREX transaction. Standard lots are worth about 100,000 US dollars.

Major Currency – The US dollar, the euro (which replaced the German mark), the Swiss franc, the British pound, and the Japanese yen are the major currencies.

Minor Currency – The Canadian dollar, the Australian dollar, and the New Zealand dollar are the minor currencies.

One Cancels the Other (OCO) – Two orders placed simultaneously with instructions to cancel the second order on execution of the first.

Open Position – An active trade that has not been closed.

Pips or Points – The smallest price increment in which a currency pair is quoted – 0.0001 for most pairs, 0.01 for yen pairs.

Quote Currency – The second currency in a currency pair. In the currency pair EUR/USD, the US dollar is the quote currency.

Rollover – Extending the settlement time of spot deals to the current delivery date. The cost of rollover is calculated using swap points based on interest rate differentials.

Technical Analysis – Analysis of historical market data to predict future movements in the market.

Tick – The minimum change in price.

Transaction Cost – The cost of a FOREX transaction – typically the spread between bid and ask prices.

Volatility – A statistical measure of the tendency toward sharp price movements within a period of time.


Keyword stuffing

From Wikipedia, the free encyclopedia


Keyword stuffing is considered to be an unethical search engine optimization (SEO) technique. Keyword stuffing occurs when a web page is loaded with keywords in the meta tags or in content. The repetition of words in meta tags may explain why many search engines no longer use these tags.

Keyword stuffing is used to obtain maximum search engine ranking and visibility for particular phrases. A word that is repeated too often may raise a red flag to search engines. In particular, Google has been known to delist sites employing this technique, and their indexing algorithm specifically lowers the ranking of sites that do this.
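
As a rough illustration, the kind of repetition that raises such flags can be quantified as keyword density, the share of a page's words taken up by one term; a few lines of Python show the idea (a generic sketch, not any particular engine's algorithm):

    import re
    from collections import Counter

    def keyword_density(text, keyword):
        """Share of all words on the page that are the given keyword."""
        words = re.findall(r"[a-z0-9']+", text.lower())
        if not words:
            return 0.0
        return Counter(words)[keyword.lower()] / len(words)

    page = "cheap flights cheap flights book cheap flights today"
    print(keyword_density(page, "cheap"))   # -> 0.375, i.e. 'cheap' is 37.5% of the words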

Hiding text out of view of the visitor is done in many different ways. Text colored to blend with the background, CSS z-index positioning to place text "behind" an image – and therefore out of view of the visitor – and CSS absolute positioning to place the text far from the page center are all common techniques. As of 2005, some of these invisible text techniques can be detected by major search engines.

"Noscript" tags are another way to place hidden content within a page. While they are a valid optimization method for displaying an alternative representation of scripted content, they may be abused, since search engines may index content that is invisible to most visitors.

Inserted text sometimes includes words that are frequently searched (such as "sex"), even if those terms bear little connection to the content of a page, in order to attract traffic to advert-driven pages.

Keyword stuffing can be considered to be either a white hat or a black hat tactic, depending on the context of the technique, and the opinion of the person judging it. While a great deal of keyword stuffing is employed to aid in spamdexing, which is of little benefit to the user, keyword stuffing in certain circumstances is designed to benefit the user and not skew results in a deceptive manner. Whether the term carries a pejorative or neutral connotation is dependent on whether the practice is used to pollute the results with pages of little relevance, or to direct traffic to a page of relevance that would have otherwise been de-emphasized due to the search engine's inability to interpret and understand related ideas.


Shopping cart software

From Wikipedia, the free encyclopedia


Shopping cart software is software used in e-commerce to assist people making purchases online; the name is an analogy to the shopping cart of American English usage. In British English it is generally known as a shopping basket, almost exclusively shortened on websites to 'basket'.

The software allows online shopping customers to place items in the cart. Upon checkout, the software typically calculates a total for the order, including shipping and handling (i.e. postage and packing) charges and the associated taxes, as applicable.


Technical definition

These applications typically provide a means of capturing a client's payment information, but in the case of a credit card they rely on the software module of a secure payment gateway provider in order to conduct secure credit card transactions online.

Some setup must be done in the HTML code of the website, and the shopping cart software must be installed on the server which hosts the site, or on the secure server which accepts sensitive ordering information. E-shopping carts are usually implemented using HTTP cookies or query strings. In most server-based implementations, however, data related to the shopping cart is kept in the session object and is accessed and manipulated on the fly as the user adds and removes items. Later, at checkout, this information is used to generate an order for the selected items, and the shopping cart is cleared.
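
A minimal sketch of that session-keyed approach in Python (illustrative only; real cart software also handles persistence, stock, payment and security):

    # One cart per session id; the total is computed at checkout and the cart cleared.
    carts = {}   # session_id -> {product: (unit_price, quantity)}

    def add_item(session_id, product, unit_price, quantity=1):
        cart = carts.setdefault(session_id, {})
        _, old_qty = cart.get(product, (unit_price, 0))
        cart[product] = (unit_price, old_qty + quantity)

    def checkout_total(session_id, shipping=5.00, tax_rate=0.10):
        cart = carts.pop(session_id, {})   # generate the order, clearing the cart
        subtotal = sum(price * qty for price, qty in cart.values())
        return round(subtotal * (1 + tax_rate) + shipping, 2)

    add_item("sess-42", "T-shirt", 15.00, 2)
    add_item("sess-42", "Mug", 10.00)
    print(checkout_total("sess-42"))   # -> 49.0 (40.00 subtotal + 10% tax + 5.00 shipping)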

Although the simplest shopping carts only allow an item to be added to a basket to start a checkout process (e.g. the free PayPal shopping cart), most shopping cart software actually provides additional features that an Internet merchant uses to fully manage an online store. Data (products, categories, discounts, orders, customers, etc.) is normally stored in a database and accessed in real time by the software.

Components

Shopping cart software typically consists of two components:

Storefront: the area of the Web store that is accessed by visitors to the online shop. Category, product, and other pages (e.g. search, best sellers, etc.) are dynamically generated by the software based on the information saved in the store database.

Administration: the area of the Web store that is accessed by the merchant to manage the online shop. The amount of store management features changes depending on the sophistication of the shopping cart software, but in general a store manager is able to add and edit products, categories, discounts, shipping and payment settings, etc. Order management features are also included in many shopping cart programs.

Licensed vs. Hosted options

Shopping cart software generally falls into two categories.

Licensed software: The software is downloaded and then installed on a Web server. This is most often associated with a one-time fee, although there are many free products available as well. The main advantages of this option are that the merchant owns a license and therefore can host it on any Web server that meets the server requirements, and that the source code can often be accessed and edited to customize the application.

Hosted service: The software is never downloaded, but rather is provided by a hosted service provider and is generally paid for on a monthly or annual basis; this is also known as the application service provider (ASP) software model. Some of these services also charge a percentage of sales in addition to the monthly fee. This model often has predefined templates that a user can choose from to customize their store's look and feel. In this model users typically trade away some ability to modify or customize the software for the advantage of having the vendor continuously keep the software up to date with security patches as well as new features.

See also

Free software



Meta element

From Wikipedia, the free encyclopedia


Meta elements are HTML elements used to provide structured metadata about a web page. Such elements must be placed as tags in the head section of an HTML document.


Meta element use in search engine optimization

Meta elements provide information about a given webpage, most often to help search engines categorize them correctly. They are inserted into the HTML document, but are often not directly visible to a user visiting the site.

They have been the focus of a field of marketing research known as search engine optimization (SEO), where different methods are explored to provide a user's site with a higher ranking on search engines. In the mid to late 1990s, search engines were reliant on meta data to correctly classify a web page and webmasters quickly learned the commercial significance of having the right meta element, as it frequently led to a high ranking in the search engines — and thus, high traffic to the web site.

As search engine traffic achieved greater significance in online marketing plans, consultants were brought in who were well versed in how search engines perceive a web site. These consultants used a variety of techniques (legitimate and otherwise) to improve ranking for their clients.

Meta elements have significantly less effect on search engine results pages today than they did in the 1990s, and their utility has decreased dramatically as search engine robots have become more sophisticated. This is due in part to the endless repetition of keywords in meta elements (keyword stuffing) and to attempts by unscrupulous website placement consultants to manipulate (spamdexing) or otherwise circumvent search engine ranking algorithms. While search engine optimization can improve search engine ranking, consumers of such services should be careful to employ only reputable providers.

Major search engine robots are more likely to quantify such factors as the volume of incoming links from related websites, quantity and quality of content, technical precision of source code, spelling, functional v. broken hyperlinks, volume and consistency of searches and/or viewer traffic, time within website, page views, revisits, click-throughs, technical user-features, uniqueness, redundancy, relevance, advertising revenue yield, freshness, geography, language and other intrinsic characteristics.

The keywords attribute

The keywords attribute was popularized by search engines such as Infoseek and AltaVista in 1995, and its popularity quickly grew until it became one of the most commonly used meta elements[1]. By late 1997, however, search engine providers realized that information stored in meta elements, especially the keyword attribute, was often unreliable and misleading, and at worst, used to draw users into spam sites. (Unscrupulous webmasters could easily place false keywords into their meta elements in order to draw people to their site.)

Search engines began dropping support for metadata provided by the meta element in 1998, and by the early 2000s most search engines had veered completely away from reliance on meta elements. In July 2002 AltaVista, one of the last major search engines to still offer support, finally stopped considering them[2]. The Director of Research at Google, Monika Henzinger, was quoted (in 2002) as saying, "Currently we don't trust metadata"[3].

No consensus exists on whether the keywords attribute has any impact on ranking at any of the major search engines today. Some speculate that it does if the keywords used in the meta element can also be found in the page copy itself. In April 2007, 37 leaders in search engine optimization concluded that the relevance of having keywords in the meta keywords attribute is little to none[4].

The description attribute

Unlike the keywords attribute, the description attribute is supported by most major search engines, like Yahoo! and Live Search, while Google will fall back on this tag when information about the page itself is requested (e.g. using the related: query). The description attribute provides a concise explanation of a web page's content. This allows webpage authors to give a more meaningful description for listings than might be displayed if the search engine were to automatically create its own description from the page content. The description is often, but not always, displayed on search engine results pages, so it can affect click-through rates. Industry commentators have suggested that major search engines also consider keywords located in the description attribute when ranking pages.[5] The W3C does not specify a size for this description meta tag, but almost all search engines recommend keeping it shorter than 200 characters of plain text[citation needed].

The robots attribute

The robots attribute is used to control whether search engine spiders are allowed to index a page and whether they should follow links from it. The noindex value prevents a page from being indexed, and nofollow prevents its links from being crawled. Other values are available that can influence how a search engine indexes pages and how those pages appear in the search results. The robots attribute is supported by several major search engines [6]. There are several additional values for the robots meta attribute that are relevant to search engines, such as NOARCHIVE and NOSNIPPET, which tell search engines what not to do with a web page's content [7]. Meta tags are not the best option for preventing search engines from indexing the content of a website; a more reliable and efficient method is the Robots.txt file (Robots Exclusion Standard).
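
For comparison, the Robots Exclusion Standard can be consulted programmatically; this minimal sketch uses Python's standard urllib.robotparser and a hypothetical site:

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.set_url("https://www.example.com/robots.txt")   # hypothetical site
    rp.read()                                          # fetch and parse robots.txt
    print(rp.can_fetch("*", "https://www.example.com/private/page.html"))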

Additional attributes for search engines

NOODP

The search engines Google, Yahoo! and MSN in some cases use the title and abstract of the Open Directory Project (ODP) listing of a web site at Dmoz.org for the title and/or description (also called snippet or abstract) in the search engine results pages (SERPs). To give webmasters the option to specify that the ODP content should not be used for listings of their website, Microsoft introduced the new "NOODP" value for the "robots" element of the meta tags in May 2006 [8]. Google followed in July 2006[9] and Yahoo! in October 2006[10].

The syntax is the same for all search engines that support the tag; in its generic form it is <meta name="robots" content="noodp">.

Webmasters can also disallow the use of their ODP listing on a per-search-engine basis by addressing each engine's crawler by name; the values typically used are:

Google: <meta name="googlebot" content="noodp">

Yahoo!: <meta name="slurp" content="noodp">

MSN and Live Search: <meta name="msnbot" content="noodp">

NOYDIR

In addition to the ODP listing, Yahoo! also used content from its own Yahoo! Directory, but in February 2007 it introduced a meta tag that provides webmasters with the option to opt out of this[11].

Yahoo! Directory titles and abstracts will not be used in search results for a page if the NOYDIR tag is added to it.

Robots-NoContent

Yahoo! also introduced, in May 2007, the "class=robots-nocontent" tag.[12] This is not a meta tag but a class attribute value that can be applied to elements throughout a web page where needed. Content of elements carrying this class will be ignored by the Yahoo! crawler and not included in the search engine's index.

Examples of the use of the robots-nocontent tag:

<div class="robots-nocontent">excluded content</div>

<span class="robots-nocontent">excluded content</span>

<p class="robots-nocontent">excluded content</p>

Academic studies

Google does not use HTML keyword or meta tag elements for indexing. The Director of Research at Google, Monika Henzinger, was quoted (in 2002) as saying, "Currently we don't trust metadata" [13]. Other search engines developed techniques to penalize web sites considered to be "cheating the system". For example, a web site repeating the same meta keyword several times may have its ranking decreased by a search engine trying to eliminate this practice, though that is unlikely. It is more likely that a search engine will ignore the meta keyword element completely, and most do regardless of how many words are used in the element.

Meta tags use in social bookmarking

In contrast to completely automated systems like search engines, author-supplied metadata can be useful in situations where the page content has been vetted as trustworthy by a reader.

Redirects

Meta refresh elements can be used to instruct a web browser to automatically refresh a web page after a given time interval. It is also possible to specify an alternative URL and use this technique to redirect the user to a different location. Using a meta refresh in this way, and solely by itself, rarely achieves the desired result: in Internet Explorer's security settings, under the miscellaneous category, meta refresh can be turned off by the user, thereby disabling its redirect ability entirely.

Many web design tutorials also point out that client-side redirecting tends to interfere with the normal functioning of a web browser's "back" button. After being redirected, clicking the back button causes the user to go back to the redirect page, which redirects them again. Some modern browsers, including Safari, Mozilla Firefox and Opera, seem to overcome this problem, however.

HTTP message headers

Meta elements of the form <meta http-equiv="header-name" content="value"> can be used as alternatives to HTTP headers. For example, <meta http-equiv="expires" content="Wed, 21 Jun 2006 14:25:27 GMT"> would tell the browser that the page "expires" on June 21, 2006 at 14:25:27 GMT and that it may safely cache the page until then.

Alternative to meta elements

An alternative to meta elements for enhanced subject access within a web site is the use of a back-of-book-style index for the web site. See examples at the web sites of the Australian Society of Indexers and the American Society of Indexers.

In 1994, ALIWEB, which was likely the first web search engine, also used an index file to provide the type of information commonly found in meta keywords attributes.

See also

References

  1. ^ Statistic (June 4, 1997), META attributes by count, Vancouver Webpages, retrieved June 3, 2007
  2. ^ Danny Sullivan (October 1, 2002), Death Of A Meta Tag, SearchEngineWatch.com, retrieved June 3, 2007
  3. ^ Journal of Internet Cataloging, Volume 5(1), 2002
  4. ^ Rand Fishkin (April 2, 2007), Search Engine Ranking Factors V2, SEOmoz.org, retrieved June 3, 2007
  5. ^ Danny Sullivan, How To Use HTML Meta Tags, Search Engine Watch, December 5, 2002
  6. ^ Vanessa Fox, Using the robots meta tag, Official Google Webmaster Central Blog, March 5, 2007
  7. ^ Danny Sullivan (March 5, 2007),Meta Robots Tag 101: Blocking Spiders, Cached Pages & More, SearchEngineLand.com, retrieved June 3, 2007
  8. ^ Betsy Aoki (May 22, 2006), Opting Out of Open Directory Listings for Webmasters, Live Search Blog, retrieved June 3, 2007
  9. ^ Vanessa Fox (July 13, 2006), More control over page snippets, Inside Google Sitemaps, retrieved June 3, 2007
  10. ^ Yahoo! Search (October 24, 2006), Yahoo! Search Weather Update and Support for 'NOODP', Yahoo! Search Blog, retrieved June 3, 2007
  11. ^ Yahoo! Search (February 28, 2007), Yahoo! Search Support for 'NOYDIR' Meta Tags and Weather Update, Yahoo! Search Blog, retrieved June 3, 2007
  12. ^ Yahoo! Search (May 2, 2007), Introducing Robots-Nocontent for Page Sections, Yahoo! Search Blog, retrieved June 3, 2007
  13. ^ Journal of Internet Cataloging, Volume 5(1), 2002


PageRank

From Wikipedia, the free encyclopedia

How PageRank Works

PageRank is a link analysis algorithm that assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is also called the PageRank of E and denoted by PR(E).

PageRank was developed at Stanford University by Larry Page (hence the name Page-Rank[1]) and later Sergey Brin as part of a research project about a new kind of search engine. The project started in 1995 and led to a functional prototype, named Google, in 1998. Shortly after, Page and Brin founded Google Inc., the company behind the Google search engine. While just one of many factors which determine the ranking of Google search results, PageRank continues to provide the basis for all of Google's web search tools.[2]

The name PageRank is a trademark of Google. The PageRank process has been patented (U.S. Patent 6,285,999). The patent is not assigned to Google but to Stanford University.


General description

Google describes PageRank:[2]

PageRank relies on the uniquely democratic nature of the web by using its vast link structure as an indicator of an individual page's value. In essence, Google interprets a link from page A to page B as a vote, by page A, for page B. But, Google looks at more than the sheer volume of votes, or links a page receives; it also analyzes the page that casts the vote. Votes cast by pages that are themselves "important" weigh more heavily and help to make other pages "important".
A graphical representation of a web of links between sites used for PageRank calculations.

In other words, a PageRank results from a "ballot" among all the other pages on the World Wide Web about how important a page is. A hyperlink to a page counts as a vote of support. The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it ("incoming links"). A page that is linked to by many pages with high PageRank receives a high rank itself. If there are no links to a web page there is no support for that page.

Google assigns a numeric weighting from 0-10 to each webpage on the Internet; this PageRank denotes a site's importance in the eyes of Google. The scale for PageRank is logarithmic, like the Richter scale, and is roughly based upon the quantity of inbound links as well as the importance of the pages providing the links.

Numerous academic papers concerning PageRank have been published since Page and Brin's original paper.[3] In practice, the PageRank concept has proven to be vulnerable to manipulation, and extensive research has been devoted to identifying falsely inflated PageRank and ways to ignore links from documents with falsely inflated PageRank.

Alternatives to the PageRank algorithm include the HITS algorithm proposed by Jon Kleinberg, the IBM CLEVER project and the TrustRank algorithm.

PageRank algorithm

PageRank is a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for any-size collection of documents. It is assumed in several research papers that the distribution is evenly divided between all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value.

A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to the document with the 0.5 PageRank.

Simplified PageRank algorithm

Assume a small universe of four web pages: A, B, C and D. The initial approximation of PageRank would be evenly divided between these four documents. Hence, each document would begin with an estimated PageRank of 0.25.

If pages B, C, and D each only link to A, they would each confer 0.25 PageRank to A. All PageRank PR( ) in this simplistic system would thus gather to A because all links would be pointing to A.

PR(A) = PR(B) + PR(C) + PR(D)

But then suppose page B also has a link to page C, and page D has links to all three pages. The value of the link-votes is divided among all the outbound links on a page. Thus, page B gives a vote worth 0.125 to page A and a vote worth 0.125 to page C. Only one third of D's PageRank is counted for A's PageRank (approximately 0.083).

PR(A) = \frac{PR(B)}{2} + \frac{PR(C)}{1} + \frac{PR(D)}{3}

In other words, the PageRank conferred by an outbound link is equal to the document's own PageRank score divided by the normalized number of outbound links L( ) (it is assumed that links to specific URLs only count once per document).

PR(A) = \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)}

In the general case, the PageRank value for any page u can be expressed as:

PR(u) = \sum_{v \in B_u} \frac{PR(v)}{N_v},

i.e. the PageRank value for a page u is dependent on the PageRank values for each page v out of the set Bu (this set contains all pages linking to page u), divided by the number of links from page v (this is Nv).
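
Applied to the four-page example, a single application of this formula can be checked with a short Python sketch (the link structure and the uniform 0.25 starting values are the ones described above):

    # Outbound links in the example: B -> A, C;  C -> A;  D -> A, B, C
    out_links = {"B": ["A", "C"], "C": ["A"], "D": ["A", "B", "C"]}
    pr = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}   # initial uniform estimate

    # PR(A) = PR(B)/L(B) + PR(C)/L(C) + PR(D)/L(D)
    pr_a = sum(pr[v] / len(links) for v, links in out_links.items() if "A" in links)
    print(round(pr_a, 4))   # -> 0.4583 (0.125 from B + 0.25 from C + ~0.0833 from D)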

PageRank algorithm including damping factor

The PageRank theory holds that even an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue is a damping factor d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85.[4]

The damping factor is subtracted from 1 (and in some variations of the algorithm, the result is divided by the number of documents in the collection) and this term is then added to the product of (the damping factor and the sum of the incoming PageRank scores).

That is,

PR(A)= 1 - d + d \left( \frac{PR(B)}{L(B)}+ \frac{PR(C)}{L(C)}+ \frac{PR(D)}{L(D)}+\,\cdots \right)

or (N = the number of documents in collection)

PR(A)= {1 - d \over N} + d \left( \frac{PR(B)}{L(B)}+ \frac{PR(C)}{L(C)}+ \frac{PR(D)}{L(D)}+\,\cdots \right) .

So any page's PageRank is derived in large part from the PageRanks of other pages. The damping factor adjusts the derived value downward. The second formula above supports the original statement in Page and Brin's paper that "the sum of all PageRanks is one".[3] Unfortunately, however, Page and Brin gave the first formula, which has led to some confusion.

Google recalculates PageRank scores each time it crawls the Web and rebuilds its index. As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents.

The formula uses a model of a random surfer who gets bored after several clicks and switches to a random page. The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link. It can be understood as a Markov chain in which the states are pages, and the transitions are all equally probable and are the links between pages.

If a page has no links to other pages, it becomes a sink and therefore terminates the random surfing process. However, the solution is quite simple. If the random surfer arrives at a sink page, it picks another URL at random and continues surfing again.

When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection. Their PageRank scores are therefore divided evenly among all other pages. In other words, to be fair with pages that are not sinks, these random transitions are added to all nodes in the Web, with a residual probability of usually d = 0.85, estimated from the frequency that an average surfer uses his or her browser's bookmark feature.

So, the equation is as follows:

PR(p_i) = \frac{1-d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR (p_j)}{L(p_j)}

where p1,p2,...,pN are the pages under consideration, M(pi) is the set of pages that link to pi, L(pj) is the number of outbound links on page pj, and N is the total number of pages.
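
A minimal power-iteration sketch of this equation, assuming d = 0.85 and the sink handling described above (a page with no outbound links shares its rank among all pages), might look like this:

    def pagerank(out_links, d=0.85, iterations=50):
        """Iterate PR(p_i) = (1-d)/N + d * sum of PR(p_j)/L(p_j) over pages j linking to i."""
        pages = list(out_links)
        n = len(pages)
        pr = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            dangling = sum(pr[p] for p in pages if not out_links[p])   # sink pages
            new_pr = {}
            for p in pages:
                incoming = sum(pr[q] / len(out_links[q])
                               for q in pages if p in out_links[q])
                new_pr[p] = (1 - d) / n + d * (incoming + dangling / n)
            pr = new_pr
        return pr

    example = {"A": [], "B": ["A", "C"], "C": ["A"], "D": ["A", "B", "C"]}
    print({p: round(v, 3) for p, v in pagerank(example).items()})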

The PageRank values are the entries of the dominant eigenvector of the modified adjacency matrix. This makes PageRank a particularly elegant metric: the eigenvector is

\mathbf{R} = \begin{bmatrix} PR(p_1) \\ PR(p_2) \\ \vdots \\ PR(p_N) \end{bmatrix}

where R is the solution of the equation

\mathbf{R} =  \begin{bmatrix} {(1-d)/ N} \\ {(1-d) / N} \\ \vdots \\ {(1-d) / N} \end{bmatrix}  + d  \begin{bmatrix} \ell(p_1,p_1) & \ell(p_1,p_2) & \cdots & \ell(p_1,p_N) \\ \ell(p_2,p_1) & \ddots &  & \vdots \\ \vdots & & \ell(p_i,p_j) & \\ \ell(p_N,p_1) & \cdots & & \ell(p_N,p_N) \end{bmatrix}  \mathbf{R}

where the adjacency function \ell(p_i,p_j) is 0 if page pj does not link to pi, and normalised such that, for each j

\sum_{i = 1}^N \ell(p_i,p_j) = 1,

i.e. the elements of each column sum up to 1.

This is a variant of the eigenvector centrality measure used commonly in network analysis.

The values of the PageRank eigenvector are fast to approximate (only a few iterations are needed) and in practice it gives good results.
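
The matrix form above can be sketched with NumPy for the same four-page example, iterating R = (1 - d)/N + d M R with d = 0.85; the sink page A is spread evenly over its column, so every column sums to 1:

    import numpy as np

    d, n = 0.85, 4   # pages ordered A, B, C, D
    # Column j holds l(p_i, p_j): the share of page j's rank passed to page i.
    M = np.array([
        [0.25, 0.5, 1.0, 1/3],   # into A
        [0.25, 0.0, 0.0, 1/3],   # into B
        [0.25, 0.5, 0.0, 1/3],   # into C
        [0.25, 0.0, 0.0, 0.0],   # into D
    ])
    R = np.full(n, 1.0 / n)
    for _ in range(50):                  # a few dozen iterations converge quickly
        R = (1 - d) / n + d * (M @ R)
    print(R.round(3))                    # PageRank vector for pages A, B, C, D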

As a result of Markov theory, it can be shown that the PageRank of a page is the probability of being at that page after many clicks. This happens to equal t^{-1}, where t is the expectation of the number of clicks (or random jumps) required to get from the page back to itself.

The main disadvantage is that it favors older pages, because a new page, even a very good one, will not have many links unless it is part of an existing site (a site being a densely connected set of pages, such as Wikipedia). The Google Directory (itself a derivative of the Open Directory Project) allows users to see results sorted by PageRank within categories. The Google Directory is the only service offered by Google where PageRank directly determines display order. In Google's other search services (such as its primary Web search) PageRank is used to weight the relevance scores of pages shown in search results.

Several strategies have been proposed to accelerate the computation of PageRank.[5]

Various strategies to manipulate PageRank have been employed in concerted efforts to improve search results rankings and monetize advertising links. These strategies have severely impacted the reliability of the PageRank concept, which seeks to determine which documents are actually highly valued by the Web community.

Google is known to actively penalize link farms and other schemes designed to artificially inflate PageRank. How Google identifies link farms and other PageRank manipulation tools is among Google's trade secrets.

PageRank variations

Google Toolbar

An example of the PageRank indicator as found on the Google toolbar.

The Google Toolbar's PageRank feature displays a visited page's PageRank as a whole number between 0 and 10. The most popular websites have a PageRank of 10. The least have a PageRank of 0. Google has not disclosed the precise method for determining a Toolbar PageRank value. Google representative Matt Cutts has publicly indicated that the Toolbar PageRank values are republished about once every three months, indicating that the Toolbar PageRank values are historical rather than real-time values.[6]

Google directory PageRank

The Google Directory PageRank is an 8-unit measurement. These values can be viewed in the Google Directory. Unlike the Google Toolbar, which shows the PageRank value via a mouseover of the green bar, the Google Directory does not show the PageRank as a numeric value but only as a green bar.

False or spoofed PageRank

While the PR shown in the Toolbar is considered to be derived from an accurate PageRank value (at some time prior to the time of publication by Google) for most sites, it must be noted that this value is also easily manipulated. A current flaw is that any low PageRank page that is redirected, via a 302 server header or a "Refresh" meta tag, to a high PR page causes the lower PR page to acquire the PR of the destination page. In theory a new, PR0 page with no incoming links can be redirected to the Google home page - which is a PR 10 - and by the next PageRank update the PR of the new page will be upgraded to a PR10. This spoofing technique, also known as 302 Google Jacking, is a known failing or bug in the system. Any page's PR can be spoofed to a higher or lower number of the webmaster's choice and only Google has access to the real PR of the page. Spoofing is generally detected by running a Google search for a URL with questionable PR, as the results will display the URL of an entirely different site (the one redirected to) in its results.

Manipulating PageRank

For search-engine optimization purposes, some companies, such as Text Link Brokers, offer to sell high PageRank links to webmasters.[7] As links from higher-PR pages are believed to be more valuable, they tend to be more expensive. It can be an effective and viable marketing strategy to buy link advertisements on content pages of quality and relevant sites to drive traffic and increase a webmaster's link popularity. However, Google has publicly warned webmasters that if they are or were discovered to be selling links for the purpose of conferring PageRank and reputation, their links will be devalued (ignored in the calculation of other pages' PageRanks). The practice of buying and selling links is intensely debated across the Webmastering community. Google advises webmasters to use the nofollow HTML attribute value on sponsored links. According to Matt Cutts, Google is concerned about webmasters who try to game the system, and thereby reduce the quality of Google search results.[7]

Other uses of PageRank

A version of PageRank has recently been proposed as a replacement for the traditional ISI impact factor,[8] and implemented at eigenfactor.org. Instead of merely counting total citations to a journal, the "importance" of each citation is determined in a PageRank fashion.

PageRank has also been used to automatically rank WordNet synsets according to how strongly they possess a given semantic property, such as positivity or negativity.[9]

A similar new use of PageRank is to rank academic doctoral programs based on their records of placing their graduates in faculty positions. In PageRank terms, academic departments link to each other by hiring their faculty from each other (and from themselves).[10]

A dynamic weighting method similar to PageRank has been used to generate customized reading lists based on the link structure of Wikipedia.[11]

A Web crawler may use PageRank as one of a number of importance metrics it uses to determine which URL to visit next during a crawl of the web. One of the early working papers[12] which was used in the creation of Google is Efficient crawling through URL ordering,[13] which discusses the use of a number of different importance metrics to determine how deeply, and how much of a site, Google will crawl. PageRank is presented as one of a number of these importance metrics, though there are others listed, such as the number of inbound and outbound links for a URL, and the distance from the root directory on a site to the URL.

Google's "rel='nofollow'" proposal

In early 2005, Google implemented a new value, "nofollow", for the rel attribute of HTML link and anchor elements, so that website builders and bloggers can make links that Google will not consider for the purposes of PageRank — they are links that no longer constitute a "vote" in the PageRank system. The nofollow relationship was added in an attempt to help combat spamdexing.

As an example, people could create many message-board posts with links to their website to artificially inflate their PageRank. Now, however, the message-board administrator can modify the code to automatically insert "rel='nofollow'" to all hyperlinks in posts, thus preventing PageRank from being affected by those particular posts.

This method of avoidance, however, also has various drawbacks, such as reducing the link value of actual comments. (See: Spam in blogs#rel="nofollow")


References

  1. ^ David Vise and Mark Malseed (2005). The Google Story, 37. ISBN 0-553-80457-X.
  2. ^ a b Google Technology. [1]
  3. ^ a b The Anatomy of a Large-Scale Hypertextual Web Search Engine. Brin, S.; Page, L (1998).
  4. ^ Sergey Brin and Lawrence Page (1998). "The anatomy of a large-scale hypertextual Web search engine". Proceedings of the seventh international conference on World Wide Web 7: 107-117 (Section 2.1.1 Description of PageRank Calculation).
  5. ^ Fast PageRank Computation via a Sparse Linear System (Extended Abstract). Gianna M. Del Corso, Antonio Gullí, Francesco Romani.
  6. ^ Cutts, Matt. What’s an update? Blog post (September 8, 2005)
  7. ^ a b How to report paid links. mattcutts.com/blog (April 14, 2007). Retrieved on 2007-05-28.
  8. ^ Johan Bollen, Marko A. Rodriguez, and Herbert Van de Sompel. (December 2006). "Journal Status". Scientometrics 69 (3).
  9. ^ Andrea Esuli and Fabrizio Sebastiani. PageRanking WordNet synsets: An Application to Opinion-Related Properties. In Proceedings of the 35th Meeting of the Association for Computational Linguistics, Prague, CZ, 2007, pp. 424-431. Retrieved on June 30, 2007.
  10. ^ Benjamin M. Schmidt and Matthew M. Chingos (2007). "Ranking Doctoral Programs by Placement: A New Method". PS: Political Science and Politics 40 (July): 523-529.
  11. ^ Wissner-Gross, A. D. (2006). "Preparation of topical reading lists from the link structure of Wikipedia". Proceedings of the IEEE International Conference on Advanced Learning Technology.
  12. ^ Working Papers Concerning the Creation of Google. Google. Retrieved on November 29, 2006.
  13. ^ Cho, J., Garcia-Molina, H., and Page, L. (1998). "Efficient crawling through URL ordering". Proceedings of the seventh conference on World Wide Web.
