Questions tagged [webscarab]

OWASP WebScarab: a framework for analysing applications that communicate using the HTTP and HTTPS protocols

WebScarab is a framework for analysing applications that communicate using the HTTP and HTTPS protocols. It is written in Java, and is thus portable to many platforms. WebScarab has several modes of operation, implemented by a number of plugins. In its most common usage, WebScarab operates as an intercepting proxy, allowing the operator to review and modify requests created by the browser before they are sent to the server, and to review and modify responses returned from the server before they are received by the browser. WebScarab is able to intercept both HTTP and HTTPS communication. The operator can also review the conversations (requests and responses) that have passed through WebScarab.

You may also be interested in testing the Next Generation of WebScarab (https://www.owasp.org/index.php/OWASP_WebScarab_NG_Project)
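
Because the usual workflow is to point an HTTP client at WebScarab's local listener, here is a minimal sketch of doing that from Python; 127.0.0.1:8008 is the classic WebScarab default and should be adjusted to whatever the Proxy plugin actually reports:

    import requests

    # Send a plain HTTP request through the local WebScarab proxy so the
    # conversation shows up in its summary view (adjust host/port as needed).
    proxies = {"http": "http://127.0.0.1:8008"}
    resp = requests.get("http://www.example.com/", proxies=proxies)
    print(resp.status_code, len(resp.text))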

30 questions
2 votes • 1 answer

WebScarab: unable to view HTTPS sites

I am running WebScarab from the jar: % java -jar WebScarab-ng-0.2.1.one-jar.jar. For normal websites (HTTP) I am able to analyze the packets using WebScarab, but if I enter any secure site (HTTPS), say https://www.gmail.com, I am unable to view the…
Anirudhan J • 2,072 • 6 • 27 • 45
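
The HTTPS failure above is usually a certificate-trust problem: to decrypt the traffic, WebScarab re-signs it with its own certificate, which the client then refuses. A minimal sketch of checking this from Python, assuming the proxy listens on 127.0.0.1:8008 (the classic WebScarab default; the NG listener may differ, so check the Proxy tab) and disabling certificate verification only for this test:

    import requests

    proxies = {"https": "http://127.0.0.1:8008"}   # assumed WebScarab listener
    # verify=False accepts the certificate WebScarab substitutes for the real
    # site's certificate; use this only against a local testing proxy.
    resp = requests.get("https://www.gmail.com/", proxies=proxies, verify=False)
    print(resp.status_code)

In a browser, the equivalent step is accepting or importing the certificate that WebScarab presents.
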
2 votes • 2 answers

BeautifulSoup children of ordered list, no results

I'm using BeautifulSoup to parse code from Craigslist, but when I use the find_all command I get an empty list as output. If anyone could point out where I'm making a mistake or show me a better solution, I would be grateful! from selenium…
Fadi • 21 • 2
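
An empty find_all on Craigslist usually means the parsed HTML does not yet contain the listings (they are filled in by JavaScript) or the selector no longer matches the markup. A minimal sketch of the question's Selenium + BeautifulSoup route, with the URL and selectors as placeholders to adjust:

    from bs4 import BeautifulSoup
    from selenium import webdriver

    # Let the browser render the page, then parse the rendered HTML rather
    # than the raw server response.
    driver = webdriver.Chrome()
    driver.get("https://sfbay.craigslist.org/search/apa")   # example search URL
    soup = BeautifulSoup(driver.page_source, "html.parser")
    driver.quit()

    ol = soup.find("ol")                      # the results list; adjust the selector
    if ol is not None:
        for li in ol.find_all("li", recursive=False):   # direct children only
            print(li.get_text(strip=True))
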
2 votes • 1 answer

DataLayer with Python

Is it possible to extract a value from the data layer? This is the url =…
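
One common approach, sketched below on the assumption that the site embeds a Google Tag Manager style dataLayer as literal JSON in a script tag (many sites build it dynamically instead, in which case a headless browser is needed); the URL is a placeholder:

    import json
    import re
    import requests

    html = requests.get("https://www.example.com/").text   # placeholder URL
    # Pull the "dataLayer = [...]" assignment out of the page and parse it.
    match = re.search(r"dataLayer\s*=\s*(\[.*?\]);", html, re.DOTALL)
    if match:
        data_layer = json.loads(match.group(1))
        print(data_layer)
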
2 votes • 1 answer

How to extract data from HtmlTable in C# and arrange in a row?

I want to extract data from an HTML table row by row, but I'm having trouble separating the columns within the rows. The code I'm using below gives me each cell on its own line, but I want each row on one line, then the next. How can I do that? HtmlNode table…
Awais Shah • 31 • 4
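
The question's code is C# with HtmlAgilityPack and is truncated here, but the usual fix is the same in any library: loop over rows, loop over that row's cells, and join the cells before printing. A minimal sketch of the pattern in Python, used only to illustrate the nesting, not the asker's actual API:

    from bs4 import BeautifulSoup

    html = "<table><tr><td>a</td><td>b</td></tr><tr><td>c</td><td>d</td></tr></table>"
    soup = BeautifulSoup(html, "html.parser")
    for row in soup.find_all("tr"):                 # one printed line per <tr>
        cells = [c.get_text(strip=True) for c in row.find_all(["td", "th"])]
        print("\t".join(cells))                     # columns separated by tabs
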
2 votes • 0 answers

How to scrape a Shopify website using Beautiful Soup and get all the tags (#)

I am trying to find all the # elements in a particular webpage by using Beautiful Soup. import requests from bs4 import BeautifulSoup as Soup source = "https://www.runinrabbit.com/" def getPageContents(source): req = requests.get(source) …
Atul Anand • 29 • 6
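
Assuming the "#" elements in the question above are links whose href contains a fragment or hashtag marker (an assumption; the excerpt is cut off), a minimal requests + BeautifulSoup sketch looks like this:

    import requests
    from bs4 import BeautifulSoup

    resp = requests.get("https://www.runinrabbit.com/", timeout=30)
    soup = BeautifulSoup(resp.text, "html.parser")
    # Collect every link whose href contains "#"; change the filter if "#"
    # refers to something else on the page (e.g. hashtag text).
    hash_links = [a["href"] for a in soup.find_all("a", href=True) if "#" in a["href"]]
    print(hash_links)
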
2 votes • 2 answers

What is the way to click links that cannot be scrolled into view with Selenium + Python?

This is a piece of my code: x=driver.find_element_by_xpath("""//*[@id="react-root"]/section/main/article/div[1]/div/div/div[1]/div[2]/a""") x.click() But this error occurs: selenium.common.exceptions.ElementNotInteractableException: Message:…
Hamed Baziyad • 1,954 • 5 • 27 • 40
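
ElementNotInteractableException usually means the element exists but is not visible or clickable at the moment of the click. A minimal sketch of the common remedy: wait for the element, scroll it into view with JavaScript, then click, falling back to a JavaScript click. The URL and the shortened XPath are placeholders:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    driver.get("https://www.example.com/")                  # placeholder URL

    # Wait for the link, scroll it to the middle of the viewport, then click.
    link = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.XPATH, '//*[@id="react-root"]//a'))
    )
    driver.execute_script("arguments[0].scrollIntoView({block: 'center'});", link)
    try:
        link.click()
    except Exception:
        # Fallback: click via JavaScript when the normal click is blocked.
        driver.execute_script("arguments[0].click();", link)
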
2 votes • 1 answer

Is it possible to connect localhost applications through WebScarab?

When I try to connect a locally hosted application, like localhost/myapplications.php, through WebScarab, Apache Tomcat reports a 404 Not Found alert. But when I give a live URL such as http://www.myapps.com, it synchronises with the…
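
Localhost applications can be proxied; a frequent gotcha is that browsers bypass the proxy for localhost addresses, so the request never reaches WebScarab. One way to check the proxy path itself is a client call that does not apply such a bypass; the sketch below assumes the app runs on port 8080 and WebScarab listens on 8008 (both assumptions to adjust):

    import requests

    proxies = {"http": "http://127.0.0.1:8008"}   # assumed WebScarab listener
    # Use the explicit host:port of the local application; if this shows up in
    # WebScarab's summary, the proxy works and the browser bypass is the issue.
    resp = requests.get("http://127.0.0.1:8080/myapplications.php", proxies=proxies)
    print(resp.status_code)
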
1 vote • 4 answers

Git and WebScarab installation

I need to install WebScarab, but I do not know what Git does or how I can install WebScarab with it. Can anyone explain how to install the latest WebScarab from Git? Link: http://dawes.za.net/gitweb.cgi
user873286 • 7,799 • 7 • 30 • 38
1 vote • 1 answer

Selenium doesn't get the website to the 3rd page

Whenever I want Selenium to press Enter for me, it doesn't go to the next page. Is something wrong with the code? from selenium import webdriver from selenium.webdriver.common import keys from selenium.webdriver.common.keys import…
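
A minimal sketch of the usual pattern: send Keys.ENTER to a focused, interactable element and then wait for the navigation to finish before touching the old DOM; the element locator here is hypothetical:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    driver.get("https://www.example.com/")            # placeholder URL

    box = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.NAME, "q"))    # hypothetical search box
    )
    box.send_keys("search term", Keys.ENTER)
    # Wait until the old element goes stale, i.e. the next page has loaded.
    WebDriverWait(driver, 10).until(EC.staleness_of(box))
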
1 vote • 1 answer

Facing an issue extracting YouTube page source using Jsoup

Using Jsoup, I am able to extract most websites' page source code (right-click on the webpage and choose "View Page Source"). But for any YouTube video page, I am unable to extract the page source; it does not give the proper page source code. Tried the…
Funny Boss • 328 • 1 • 3 • 12
1 vote • 0 answers

Crawl URL, search and save results - no specific address

The page I am trying to crawl is a directory of chiropractors. I set my search criteria and get search results, but the address stays the same. It has no element of search, page number or anything, so I am not able to crawl it the usual way. Is there…
fatima • 61 • 5
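
When the address bar never changes, the search is usually submitted as a POST or a background XHR; the browser's network tab reveals the real endpoint and form fields, which can then be replayed directly. Everything in the sketch below (endpoint, field names, result selector) is hypothetical:

    import requests
    from bs4 import BeautifulSoup

    payload = {"city": "Denver", "page": 1}     # hypothetical form fields
    resp = requests.post("https://www.example-directory.com/search", data=payload)
    soup = BeautifulSoup(resp.text, "html.parser")
    for result in soup.select(".result"):       # hypothetical result selector
        print(result.get_text(strip=True))
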
0 votes • 0 answers

Unable to read document

I can't figure out how to get this script to read the actual documents within the links it pulls, or to bring back the text from those documents. I also tried to use iframe and src but was…
mason • 1 • 2
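
Reading the linked documents is a two-step job: collect the links from the index page, then fetch each link and extract its text; if a document is embedded in an iframe, the text lives at the iframe's src, not in the outer page. A minimal sketch with a placeholder URL:

    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    base = "https://www.example.com/index.html"        # placeholder URL
    index = BeautifulSoup(requests.get(base).text, "html.parser")
    for a in index.find_all("a", href=True):
        doc_url = urljoin(base, a["href"])
        doc = BeautifulSoup(requests.get(doc_url).text, "html.parser")
        frame = doc.find("iframe", src=True)
        if frame:                                      # follow an embedded frame
            frame_url = urljoin(doc_url, frame["src"])
            doc = BeautifulSoup(requests.get(frame_url).text, "html.parser")
        print(doc_url, doc.get_text(" ", strip=True)[:200])
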
0 votes • 1 answer

Fetch image address using Python requests and BeautifulSoup

I tried to fetch some pictures from a website using Python, but it is not easy for me. Here is what I did: import requests from bs4 import BeautifulSoup import re url = "https://www.mayde.com/ponytail" res =…
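
A minimal sketch for collecting absolute image addresses from that page; note that shops often lazy-load images, so the real address may sit in a data-src attribute rather than src (the attribute name is site-dependent):

    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    url = "https://www.mayde.com/ponytail"
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    for img in soup.find_all("img"):
        src = img.get("data-src") or img.get("src")    # prefer the lazy-load attribute
        if src:
            print(urljoin(url, src))                   # make relative paths absolute
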
0 votes • 1 answer

find() takes no keyword arguments in web scraping

Please help me find the error, as I don't understand it correctly: from bs4 import BeautifulSoup import requests import pandas as pd url = 'https://www.imdb.com/chart/top/?ref_=nv_mv_250' response = requests.get(url) with…
Dodger • 1 • 2
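
This TypeError is raised when .find() is called on a plain string (Python's str.find) rather than on a BeautifulSoup Tag, typically because .text was taken too early or an earlier lookup returned a string. A minimal sketch of the distinction:

    from bs4 import BeautifulSoup

    html = "<ul><li class='title'>The Shawshank Redemption</li></ul>"
    soup = BeautifulSoup(html, "html.parser")

    li = soup.find("li", class_="title")    # Tag object: keyword arguments work
    print(li.text)                          # take .text only at the final step

    # li.text.find("x", class_="y") would raise the error, because str.find()
    # accepts no keyword arguments.
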
0 votes • 0 answers

How to categorize the output file of a Python website-scraping script with Selenium

This is my code and it works, but I get the data from the tables blended together and out of order. Can anybody categorize it? from selenium import webdriver import time url =…
Saeid Vaygani • 179 • 1 • 1 • 8
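
A minimal sketch of keeping the table structure intact: iterate row by row and cell by cell, and write one CSV row per table row instead of dumping all of the text in one go; the URL and selectors are placeholders:

    import csv
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://www.example.com/")              # placeholder URL

    with open("output.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for row in driver.find_elements(By.CSS_SELECTOR, "table tr"):
            cells = row.find_elements(By.CSS_SELECTOR, "td, th")
            writer.writerow([c.text.strip() for c in cells])   # one CSV row per <tr>
    driver.quit()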