
I want to test whether some content does not contain any HTML. What is a simple and clean way to do so?

page.find(".description").should_not have_content /\<.*\>/

This does not work properly: it fails on &lt;strong&gt;Lorem but passes on <strong>Lorem, probably due to the way Capybara handles HTML escaping for its users.

Solving it with XPath works, but leaves me wondering whether there isn't a much simpler solution.

page.should_not have_selector(:xpath, "//div[@class='description']/*")

Is there a built-in way in Capybara to detect whether some text has been stripped of HTML?

berkes

1 Answer


Capybara's has_content? method (and thus have_content) is meant to inspect the text rendered to the user, not the HTML source of an element, as you expect.

Thus, I think Capybara's behavior is correct:

  • If the HTML source of .description is &lt;strong&gt;Lorem, the user sees <strong>Lorem. Capybara searches that text for /\<.*\>/ and finds a match.
  • If the HTML source of .description is <strong>Lorem, the user sees Lorem. Capybara searches that text for /\<.*\>/ and finds no match.
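To make the two cases concrete, here is a minimal sketch (not part of the original answer) using Capybara.string, which wraps an HTML fragment in a driverless node; the .description markup is assumed for illustration:

# Escaped entities are decoded when Capybara reads the rendered text,
# so /\<.*\>/ matches:
node = Capybara.string('<div class="description">&lt;strong&gt;Lorem</div>')
node.find('.description').text   # => "<strong>Lorem"

# Real markup becomes an element rather than text, so there is no match:
node = Capybara.string('<div class="description"><strong>Lorem</strong></div>')
node.find('.description').text   # => "Lorem"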

If you want to inspect the HTML source of an element, you can use JavaScript's innerHTML property:

source_text = page.evaluate_script("document.querySelector('.description').innerHTML")
source_text.should_not =~ /\<.*\>/
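If you are on the default RackTest driver (no JavaScript), a similar check is possible without evaluate_script. This is a sketch that assumes the element's native node is a Nokogiri element, which is the case under RackTest:

# Read the element's raw inner HTML straight from the underlying Nokogiri node.
source_text = page.find('.description').native.inner_html
source_text.should_not =~ /\<.*\>/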
Andrei Botalov
  • That requires me to run these steps through Selenium, which is a lot slower and more complex (I already test some JS interaction, but as little as possible). I think the `has_selector` route is preferable, then: `page.should_not have_selector(".description strong")` – berkes Mar 22 '13 at 08:24
  • @berkes I didn't know which driver you use. Look at the other answers in the linked question; they are for RackTest – Andrei Botalov Mar 22 '13 at 08:57
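A generalized version of the selector route mentioned in the comments, as a sketch (not from the original thread): instead of checking only for strong, assert that .description has no child elements at all, which is the CSS equivalent of the XPath in the question.

page.should_not have_selector(".description > *")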