How to Remove Pages from Google - but keep your site private

I have a website that I use for stuff, but not stuff on the public web. I use it for serving my calendars and private web applications. I use Apache's built-in authentication to keep it from being crawled and to keep casual visitors from wandering in. I have a domain name assigned to it from dyndns.org for convenience. The ddclient script runs on one of my boxes and updates the IP address over there whenever mine changes. The system works very well. Most of the time. Somehow one weekend the domain name was left pointing at my old IP address for a while when I was out of town. Whoever had that IP address sure was serving up a lot of nasty stuff. Now Google thinks all that nasty stuff is on my private domain.
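
For anyone curious about that dynamic DNS piece, here's a minimal ddclient.conf sketch. The protocol and server lines are the usual dyndns.org settings, but the hostname, login, and password are hypothetical placeholders rather than my real values:

  # /etc/ddclient.conf -- minimal sketch, placeholder values throughout
  # use the dyndns.org update protocol and discover the public IP via checkip
  protocol=dyndns2
  use=web, web=checkip.dyndns.org
  server=members.dyndns.org
  login=my-dyndns-username
  password=my-dyndns-password
  # the hostname to keep pointed at the current IP
  example.dyndns.org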

I'm going to fix it. I use Google Webmaster Tools for other stuff and I see there's a URL removal tool in there. To use the tool you have to verify that you own the domain - a reasonable request. The thing is, the URLs I want to remove are on a domain that I don't want Google to crawl, and the way Google verifies that you own the domain is by retrieving a specific URL from it. What a dilemma.

Luckily Apache access control and authentication are quite flexible and can deal with this handily. My example uses the Apache httpd.conf file but the important directives that I use are available in .htaccess as well. What I really want to do is allow someone to have access to one file and one file only on my website. So my directory section used to look like this:

  <Directory "/home/rob/www/example/htdocs">
    AllowOverride All
    AuthType Basic
    AuthName "private stuff"
    AuthUserFile /etc/apache2/user.passwd
    Require valid-user
    Order deny,allow
    Deny from all
    Allow from 127.0.0
    Allow from 192.168.1
    Satisfy Any
  </Directory>

What this section says is to use the Basic authentication method to allow users listed in user.passwd to get at URLs in htdocs. Then it says to deny access to anyone except people on the 127.0.0 or 192.168.1 subnets (localhost and the LAN). Since there are two ways to figure out who gets in - either user name with password or IP address - the Satisfy directive says that either of the two methods is acceptable. Satisfy All would mean that users had to pass both tests (be on the LAN and have a valid username & password).
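
Just to make the difference concrete, the stricter variant would look something like this - a sketch only, since I don't actually run it - the same block with Satisfy flipped, so a visitor would need an allowed address and a valid login:

  <Directory "/home/rob/www/example/htdocs">
    AllowOverride All
    AuthType Basic
    AuthName "private stuff"
    AuthUserFile /etc/apache2/user.passwd
    Require valid-user
    Order deny,allow
    Deny from all
    Allow from 127.0.0
    Allow from 192.168.1
    # both conditions must now hold: allowed IP address AND valid user
    Satisfy All
  </Directory>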

Google needs to get in too now. But just a little bit. I added Files sections inside that Directory to provide an exception for their crawler.

  <Directory "/home/rob/www/example/htdocs">
    AllowOverride All
    AuthType Basic
    AuthName "private stuff"
    AuthUserFile /etc/apache2/user.passwd
    Require valid-user
    Order deny,allow
    Deny from all
    Allow from 127.0.0
    Allow from 192.168.1
    Satisfy Any
    <Files google1234567890abcdef.html>
      Order allow,deny
      Allow from all
    </Files>
    <Files noexist_1234567890abcdef.html>
      Order allow,deny
      Allow from all
    </Files>
    <Files robots.txt>
      Order allow,deny
      Allow from all
    </Files>
  </Directory>

The second Files directive is there because Google has to establish that the server doesn't return HTTP code 20x for any old URL. The third one is there because I discovered that I'll need a robots.txt as well.

So they check for a file that should exist and one that shouldn't. The one that should exist is the one they ask you to create, called google1234567890abcdef.html. They assume that noexist_1234567890abcdef.html should not exist and expect to get a 404 for it. I allow access to both, but of course I only created google1234567890abcdef.html (just an empty text file), so noexist_1234567890abcdef.html will 404. The access logs for my server showed me what to look for:

rob@copper:~/www> tail logs/example-*
...
74.125.75.1 - - [26/Sep/2008:01:19:29 -0400] "GET /google1234567890abcdef.html HTTP/1.1" 200 - "-" "Google-Sitemaps/1.0"
74.125.75.1 - - [26/Sep/2008:01:19:29 -0400] "GET /noexist_1234567890abcdef.html HTTP/1.1" 401 1266 "-" "Google-Sitemaps/1.0"
...

At the time of that request I didn't have the second Files directive yet. The 401 response to the second request confounded Google, and it gave me this error:

Last attempt Sep 25, 2008: Our system has experienced a temporary problem.

After adding the second Files directive (and restarting Apache) I told Google to look again and it worked just fine. For bonus points, I could have used the IP address Google came in from (74.125.75.1) to restrict access even further, as in

    <Files google1234567890abcdef.html>
      Order allow,deny
      Allow from 74.125.75
    </Files>
    <Files noexist_1234567890abcdef.html>
      Order allow,deny
      Allow from 74.125.75
    </Files>

But that seems like overkill to me.
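
If you want to double-check what Google will see before asking it to verify again, a couple of HEAD requests from a machine outside the LAN will do it (from inside, the Allow from rules let everything through anyway). The hostname and the prompt here are stand-ins for my real ones; the first request should come back 200 and the second 404:

rob@elsewhere:~> curl -sI http://example.dyndns.org/google1234567890abcdef.html | head -1
HTTP/1.1 200 OK
rob@elsewhere:~> curl -sI http://example.dyndns.org/noexist_1234567890abcdef.html | head -1
HTTP/1.1 404 Not Found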

Google Webmaster Tools has the removal tool at Tools >> Remove URLs. There it states that:

To block a page or image from your site, do one of the following, and then submit your removal request:

  • Make sure the content is no longer live on the web. Requests for the page or image you want to remove must return an HTTP 404 (not found) or 410 status code.
  • Block the content using a meta noindex tag.
  • Block the content using a robots.txt file.

To remove your entire site, or a complete directory, use a robots.txt file

Okay, the way I read that, it sounds like I need a robots.txt to get this done quick. That's because my site would give a 401 Unauthorized instead of a 404 Not Found or 410 Gone for these URLs. Sigh.

I guess there's a good side-effect. If I goof up in the future and a stray link lets in a bot that respects robots.txt, it will still skip everything on my site, so I'll make one. This is why there's a Files directive for robots.txt in the code above (I'm writing this as I do it). The contents of robots.txt are simple and should block crawling by all robots if they honour it:

User-Agent: *
Disallow: /

After that I went into the URL Removal Tool and picked "Entire Site". Even though all that's in there is a bunch of junk I don't want associated with me, it still feels scary to choose that option. The request is pending now and I expect that there shouldn't be a problem getting it removed.

Oh, and earlier I said this would work with just .htaccess. From Apache's documentation, it looks like you'd just put a .htaccess file in the appropriate physical directory and use the same Files directives, without the Directory wrapper around them.
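
I haven't tried it, but the .htaccess version would presumably be the contents of the Directory block dropped into a file in htdocs, minus AllowOverride (which isn't valid inside .htaccess; the server's AllowOverride All already permits the AuthConfig and Limit directives used here). Something like this, with the same file names Google assigned me:

  # /home/rob/www/example/htdocs/.htaccess -- untested sketch
  AuthType Basic
  AuthName "private stuff"
  AuthUserFile /etc/apache2/user.passwd
  Require valid-user
  Order deny,allow
  Deny from all
  Allow from 127.0.0
  Allow from 192.168.1
  Satisfy Any
  <Files google1234567890abcdef.html>
    Order allow,deny
    Allow from all
  </Files>
  <Files noexist_1234567890abcdef.html>
    Order allow,deny
    Allow from all
  </Files>
  <Files robots.txt>
    Order allow,deny
    Allow from all
  </Files>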
