I'm attempting to write a web scraper, but the website returns a 403 Forbidden to my code even though the same page is accessible through a browser. My main question is: is this something the site set up to discourage web scraping, or am I doing something wrong?
import java.net.*;
import java.io.*;

public class Main {
    public static void main(String[] args) throws Exception {
        URL oracle = new URL("http://www.pcgs.com/prices/");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(oracle.openStream()));
        String inputLine;
        while ((inputLine = in.readLine()) != null)
            System.out.println(inputLine);
        in.close();
    }
}
If I change the URL to a site like http://www.google.com, it returns the HTML just fine. If the site is blocking my code, is there a way around that? Thanks for the help.
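For what it's worth, one thing I've read is that some servers reject requests whose User-Agent looks like the default "Java/1.x" string, so a 403 can happen even when the page is public. Here's a minimal sketch of what I mean, assuming the block is based on the User-Agent (the Mozilla string and the class name `ScrapeWithUserAgent` are just placeholders I made up):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ScrapeWithUserAgent {
    // Builds a connection that identifies itself as a desktop browser
    // instead of the default "Java/1.x" agent string. Note that
    // URL.openConnection() does not contact the server yet.
    static HttpURLConnection openWithBrowserAgent(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestProperty("User-Agent",
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64)");
        return conn;
    }

    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = openWithBrowserAgent("http://www.pcgs.com/prices/");
        // getInputStream() is what actually performs the request.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String inputLine;
            while ((inputLine = in.readLine()) != null)
                System.out.println(inputLine);
        }
    }
}
```

I don't know if this is the right fix for this particular site, but would this be the correct way to send a different User-Agent header?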