The key to getting links to your site is to create unique, compelling content that other people want to link to. Google’s very good at detecting unnatural links that violate our Webmaster Guidelines (for example, those that come from link-exchange schemes, paid links schemes, or are auto-generated), so participating in such schemes could end up doing more harm than good. Check out this video for more ideas:
The web is an organic, constantly shifting ecosystem, and your site’s performance in search can change for many reasons. Maybe Google is having problems crawling your site, or maybe there’s a problem with your content. To start troubleshooting, check out My site isn’t doing well in search.
Personalized annotations next to your page in search results may increase your site’s visibility and make your site’s snippet more compelling, which may in turn increase the odds that users will click through to your page. We're working on ways to provide webmasters with more data about the impact of +1 on their site's performance in search—stay tuned for updates!
Google's goal is to return the best and most relevant results to the user, regardless of the top-level domain. If our system determines that the best result is a page on a new gTLD, we'll return that page in search results.
Content recommended by friends and acquaintances is often more relevant than content from strangers. For example, a movie review from an expert is useful, but a movie review from a friend who shares your tastes can be even better. Because of this, +1’s from friends and contacts can be a useful signal to Google when determining the relevance of your page to a user’s query. This is just one of many signals Google may use to determine a page’s relevance and ranking, and we’re constantly tweaking and improving our algorithm to improve overall search quality. For +1's, as with any new ranking signal, we'll be starting carefully and learning how those signals affect search quality.
We work really hard to make sure competitors can’t negatively affect other sites’ rankings. If you're concerned about another site linking to yours, we suggest contacting the webmaster of the site in question. Google aggregates and organizes information published on the web; we don't control the content of these pages.
Our goal is to provide users with the best and most relevant results for their query. Sometimes webmasters—accidentally or on purpose—use techniques that attempt to game the system. For example, a site might include hidden text or links, or use cloaking or sneaky redirects. The quality guidelines section of our webmaster guidelines outlines some of the illicit practices which can lead to a penalty.
Submit a reconsideration request. Please allow several weeks for the reconsideration process. Unfortunately, we can't reply individually to reconsideration requests at this time. If it's been several weeks since you submitted your reconsideration request, and you haven't seen any changes in your site's performance, this probably means that your site is still in violation of the Webmaster Guidelines, or it simply is not ranking as well as you'd like.
In general, you can improve your images’ performance in search by giving Google as much information about your images as possible. You can do this by providing detailed, informative filenames and alt text, or by submitting an Image Sitemap.
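For example, a descriptive filename and alt attribute (the URL and text here are hypothetical) might look like this:

```html
<!-- A descriptive filename and informative alt text help Google
     understand what the image shows -->
<img src="/images/golden-retriever-puppy.jpg"
     alt="Golden Retriever puppy playing fetch in the park">
```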
You can help us find and index your video content by creating and submitting a Video Sitemap. This tells Google about videos on your site that we might not otherwise discover, and lets you provide useful information, such as the title or running time of the video.
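A Video Sitemap is an XML file that uses the video extension of the Sitemap protocol. Here's a minimal sketch of a single entry (all URLs, titles, and durations below are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <loc>http://www.example.com/videos/making-pizza</loc>
    <video:video>
      <video:thumbnail_loc>http://www.example.com/thumbs/pizza.jpg</video:thumbnail_loc>
      <video:title>Making pizza at home</video:title>
      <video:description>A short walkthrough of making pizza dough from scratch.</video:description>
      <!-- Running time in seconds -->
      <video:duration>240</video:duration>
    </video:video>
  </url>
</urlset>
```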
The first line of any search result is the title of the web page. This text is generally taken from the contents of the <title> tag for that page (which is also the text that appears in the title bar of your browser). Occasionally (generally when the title tag is not meaningful or the page is not crawlable) Google will pull the title from the anchor text of a link to that page, or from the Open Directory Project.
Your page title gives Google additional information about the content of the page. Relevant, useful titles also help users decide which site to click in the results page.
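For example, a relevant, descriptive title tag (the business name here is made up) could look like this:

```html
<head>
  <title>Fresh-Cut Flowers Delivered Same Day | Example Florist</title>
</head>
```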
Google automatically attempts to extract the part of the page that's most relevant to the user’s query. You can use the nosnippet meta tag to prevent Google from showing snippets, or you can help us pick good snippets by writing useful meta descriptions.
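Both options go in the page's head section (the description text below is an invented example):

```html
<!-- Tell Google not to show a snippet for this page -->
<meta name="googlebot" content="nosnippet">

<!-- Or suggest a good snippet by writing a useful meta description -->
<meta name="description" content="Step-by-step instructions for repotting an orchid, with photos of each stage.">
```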
The +1 button shows up for signed-in users of google.com in English who use a modern desktop browser. If you’re not signed in, you may still see +1 buttons, but if you click a button, you’ll be asked to sign in.
Some sites, especially large-scale review sites, use RDFa, microdata, or microformats to identify structured information—such as reviews, product data, or contact information—in their content. This can be very helpful to users, but it does require you to mark up your site’s content in a very structured way.
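As a sketch, a review marked up with schema.org microdata (the names and values here are invented) might look like this:

```html
<div itemscope itemtype="http://schema.org/Review">
  <span itemprop="name">Great pasta, friendly staff</span>
  by <span itemprop="author">Jane Doe</span>
  <div itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
    Rating: <span itemprop="ratingValue">4</span> out of
    <span itemprop="bestRating">5</span>
  </div>
</div>
```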
Google links to the current version of a page, and also stores a copy of a recent version of that page so you can see what it looked like recently, or view the stored ("cached") copy if the current page is not available. You can also view a text-only version of your cached page. Because search engines crawl mainly text, this is a great way to see how your page appears to Google. (For example, if important content isn't visible in the text-only version of the page, it may be because it's embedded in an image or otherwise unavailable to search engines.)
Sitelinks are algorithmically generated links intended to help users navigate quickly to the relevant parts of your site. Not all sites have sitelinks, and they are not always available. We only show sitelinks for results when we think they'll be useful to the user.
If the structure of your site doesn't allow our algorithms to find good sitelinks, or if sitelinks for your site aren't relevant to the user's query, they won't be displayed. There are, however, best practices you can follow to improve the quality of your sitelinks. For example, make sure your site's internal links have alt text and anchor text that's informative, compact, and avoids repetition.
If Google detects that your query is location-based (for example, [seattle pizza]), we'll display a map with relevant locations. We may also display relevant images alongside the map (for example, we might show a picture of the Space Needle in the results for Seattle).
If you have a business or other location that is not showing up on a map in search results, make sure that it's been added to Google Places.
Google doesn't control the content of the web, so before we remove a page from our search results, the owner has to change it or take it down. If that's you, just make the changes you want. We'll see them the next time we crawl your site, and we'll update our index.
After you've made the changes, you can expedite the removal process by submitting a URL removal request. If you don't own the site, and the webmaster won't take the content down, you can still request removal of certain confidential or personal information, such as your government ID number, bank account number, or signature.
If a page has changed and you urgently need to expedite the removal of outdated information, you can file a removal request and select the "Remove page from cache only" option. If you don't want Google to ever display a Cached link for a page, use the noarchive meta tag.
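The noarchive directive goes in the page's head section:

```html
<!-- Prevent Google from showing a Cached link for this page -->
<meta name="robots" content="noarchive">
```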
If you find that another site is duplicating your content by scraping (misappropriating and republishing) it, it's unlikely that this will negatively impact your site's ranking in Google search results pages. If you do spot a case that's particularly frustrating, you are welcome to file a DMCA request to claim ownership of the content and request removal of the other site from Google's index.
If your content is private, you must use server side authentication (password-protection) to block access to it. Don't rely on robots.txt or meta/header tags to keep private content from becoming public. This video gives some recommendations for various ways to prevent Google and other search engines from crawling private content.
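As one common approach, HTTP Basic authentication on an Apache server can be set up with an .htaccess file (a sketch; the AuthUserFile path below is a placeholder for wherever your password file lives):

```apacheconf
# .htaccess in the private directory — visitors must log in
AuthType Basic
AuthName "Private area"
AuthUserFile /home/example/.htpasswd
Require valid-user
```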
You can also use a noindex meta tag to tell search engines not to index a certain page. In this case, make sure that that page is not disallowed in your robots.txt file. If we're not allowed to crawl the page, we won't be able to see and obey the meta tag on it.
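For example, to keep a page out of the index while still letting Googlebot crawl it and see the directive:

```html
<!-- In the page's <head>; the page must NOT be disallowed in robots.txt,
     or Google will never see this tag -->
<meta name="robots" content="noindex">
```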
You can also use the X-Robots-Tag directive, which adds Robots Exclusion Protocol (REP) meta tag support for non-HTML pages. This directive gives you the same control over your videos, spreadsheets, and other indexed file types.
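For instance, on an Apache server with mod_headers enabled, you could send a noindex header for all PDF files (a sketch; adapt the file pattern and directives to your needs):

```apacheconf
# Send an X-Robots-Tag header for every PDF served by this site
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, noarchive"
</FilesMatch>
```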
Next to the denial, you should see an icon or a link to learn more. This will provide details about why your specific removal request was denied.
If you requested removal of a directory or an entire site, you have to have blocked that content with your robots.txt file in order for your request to be successful. (Returning 404s isn't enough.)
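For example, to block a directory for all crawlers in robots.txt (the directory name here is a placeholder):

```
User-agent: *
Disallow: /private-directory/
```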
If you were requesting a cache removal, make sure you entered the URL of the page (e.g. http://www.example.com) and not the URL of the Google cache (e.g. http://209.85.273.132/search?q=cache:VE9bONM_7xsJ:www.example.com).
Make sure your URL met the requirements for removal. If you're still stumped, post a new question in our forum with the details, including what you were trying to remove and what the denial reason says.