
We have a few APIs on our site, one of which is a Places API. When the Google spider crawls our site it hits the quota for our Places API.

I have reset the API over and over and it's getting very tiring!

I also set my site up to run 3 different API projects with the same APIs (Google Places) and used logic to use up one, then switch to the next, etc. However, even with 450,000 calls per day, by noon the Google search spider has killed all 3 APIs!

This makes it so that my users can no longer use any section that uses the Places API, which is a HUGE problem! I am not being charged for Google hitting Google API calls, but it is destroying the user experience on my site and cannot be tolerated!

Please help right away!

I imagine it rests in Google's hands to fix this bug in their system; there is really nothing I can personally do, as you have read above, since I have done everything I can for my users' experience when visiting my site.

2 Answers


It's not a bug in their system; it's a bug in your site if you have hundreds of thousands of unique URLs that all make API calls and you haven't prevented crawling them using robots.txt (see here).
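For example, if the pages that make the API calls all lived under a hypothetical /places/ path, a robots.txt along these lines would keep crawlers off them while leaving the rest of the site crawlable:

User-agent: *
Disallow: /places/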

Chris
  • It is a bug; when the crawler hits any other Google API it doesn't count, it only happens with the Places API – Kelly Martin Mar 28 '16 at 06:08
  • And I'm also not looking to de-index these pages either; I do want them seen – Kelly Martin Mar 28 '16 at 06:10
  • @KellyMartin If a Google web crawler (or any other web crawler for that matter) crawls a page that makes an API request then that will count against your API quota. That is the intended behavior. Google has no way of knowing whether your web app is making an API request on behalf of a bot or a real user. – Chris Mar 28 '16 at 08:40
  • It's also apparent you've already been told the same answer on the [Google Product Forum](https://productforums.google.com/forum/#!topic/webmasters/b-xOBJlyrs4). – Chris Mar 28 '16 at 09:00
  • As I have said before, even Google over the phone agreed that it should not change the quota, and it also should not charge a person; this is a real issue. Any other API acts properly: the crawler will hit it and it will work, but the crawl will not charge you or change the quota. The Places API only halfway works; it will not charge the user, but it still affects quotas – Kelly Martin Mar 28 '16 at 16:05
  • @KellyMartin It doesn't even sound like you have a paid account so you shouldn't be being charged for any requests. You say you've got 450,000 calls per day with 3 API keys. The limit for free API keys is 150,000 requests so clearly you are only using the free API anyway. There is no way for Google to tell if a request to your website is automated or not so any API requests you make will always count towards your quota regardless of whether a human or a bot initiated them. – Chris Mar 28 '16 at 16:31
  • I'm on a billing profile (not a free one; free is 1,000, and once on billing you can do 150,000), and I get charged when I should be charged – Kelly Martin Mar 28 '16 at 17:40
  • @KellyMartin No, 1,000 is the limit until you verify your identity. After you have proven your identity by providing a credit card (which **is not charged**) the limit is raised to 150,000. The API documentation says so [here](https://developers.google.com/places/web-service/usage). – Chris Mar 28 '16 at 17:45

I ended up solving this with a workaround; for anyone else having this issue, here is what I did.

1) I have 3 API projects set up, each of which can make 150,000 calls a day.

2) I have logic set up to check whether the page is being accessed by a spider such as Googlebot.

3) If the session is coming from a spider, the 3rd API key is set to null.

4) The system tries each API one by one: if the first result set is empty it tries number 2, and if 2 is empty it tries number 3.

5) Because the 3rd API key is set to null for spiders, those 150,000 calls are set aside for real users, but now we have to stop crawlers from crawling blank content.

6) In the logic block that switches from trying API 1, then 2, then 3, I made PHP rewrite my robots.txt file. If API 1 is usable I set this:

file_put_contents('robots.txt', "User-agent: *\nDisallow:\n"); // empty Disallow = allow all crawling

The same goes for API 2. If API 3 is being used, then I rewrite robots.txt to become:

file_put_contents('robots.txt', "User-agent: *\nDisallow: /\n"); // Disallow: / = block all crawling

This sets aside 150,000 calls for users that spiders cannot touch, and once the other 300,000 calls have been exhausted, the site can no longer be crawled for the rest of the day by any spider. A rough sketch of the whole flow is below.
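Putting it together, the logic looks roughly like this (a sketch only; isSpider(), placesLookup(), the key names, and the q query parameter are placeholders, not my exact code):

<?php
// Sketch of the workaround described above: spiders lose access to the
// third key, lookups fall back through the keys in order, and robots.txt
// is rewritten depending on which key is still usable.

function isSpider() {
    // Very simple user-agent check for common crawlers.
    $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    return (bool) preg_match('/googlebot|bingbot|slurp|baiduspider/i', $ua);
}

function placesLookup($query, $apiKey) {
    // Placeholder for the real Places API request. Returns an empty
    // array when the key is null (or, in real code, when its quota is gone).
    if ($apiKey === null) {
        return array();
    }
    // ... perform the actual Places API call with $apiKey here ...
    return array('example result for ' . $query); // stand-in result
}

$keys = array('API_KEY_1', 'API_KEY_2', 'API_KEY_3'); // three separate projects

// Step 3: spiders never get to use the third key.
if (isSpider()) {
    $keys[2] = null;
}

// Step 4: try each key in turn until one returns results.
$query = isset($_GET['q']) ? $_GET['q'] : '';
foreach ($keys as $i => $key) {
    $results = placesLookup($query, $key);
    if (empty($results)) {
        continue; // this key is exhausted (or null), try the next one
    }

    // Step 6: while key 1 or 2 is serving requests, allow crawling;
    // once we are down to key 3 (reserved for users), block all crawlers.
    if ($i < 2) {
        file_put_contents('robots.txt', "User-agent: *\nDisallow:\n");
    } else {
        file_put_contents('robots.txt', "User-agent: *\nDisallow: /\n");
    }
    break;
}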

Problem solved! Told you I'd fix it myself if they couldn't.

Oh, and another note: because it's Google using Google APIs, I'm not being charged for the 300,000 calls that Google burns through; I only get charged for what real users use up. Pure perfection!