I'm trying to crawl a list of URLs and store their information locally.
I base64-encode each URL and use it as the file name for identification, but I noticed Puppeteer fails when the resulting path is too long (on Windows, paths longer than the 260-character MAX_PATH limit fail to open, surfacing as ENOENT).
const fileNamePrefix = `${currTimestamp}-${Buffer.from(validMainURL).toString('base64')}`;
await page.screenshot({ path: `${outputDirectory}/${fileNamePrefix}-index.png` });
[Error: ENOENT: no such file or directory, open 'C:\repos\Puppeteer\output\0-aHR0cHM6Ly93d3cudTJ1ZS5jb20vdGF4L2hvbWUucGhwP2VtNWhwdDRucjZldzh5dnZzdTBpZmt
xdGFuMTE5emxjZDc4c25xYnU0dTFlOHpiN3Nlb21pZ2x0eTc3c2p4cmQ3OXpza3FibXRyMGY5bnZmeGRycG4yMGZ4cDMwY3J0MGZpNGkwaG53dXF5cWtmbHRsaXBqd2c2YWN1cGNxbDZja2ZiOTd4YmJuYmdobmRkZnpweGp3bWg
yb2lsY2ZtZW83ZmR0NGd1dWR1dm1tbnMwMWhhc2JvY2VheXNuMndkZGRlcWJjNmF5-index.png'] {
errno: -4058,
code: 'ENOENT',
syscall: 'open',
path: 'C:\\repos\\Puppeteer\\output\\0-aHR0cHM6Ly93d3cudTJ1ZS5jb20vdGF4L2hvbWUucGhwP2VtNWhwdDRucjZldzh5dnZzdTBpZmtxdGFuMTE5emxjZDc4c25xYnU0dTFlOHpiN
3Nlb21pZ2x0eTc3c2p4cmQ3OXpza3FibXRyMGY5bnZmeGRycG4yMGZ4cDMwY3J0MGZpNGkwaG53dXF5cWtmbHRsaXBqd2c2YWN1cGNxbDZja2ZiOTd4YmJuYmdobmRkZnpweGp3bWgyb2lsY2ZtZW83ZmR0NGd1dWR1dm1tbnMwM
Whhc2JvY2VheXNuMndkZGRlcWJjNmF5-index.png'
}
Is there any way to override this so longer file paths are supported?