Iterate over result pages using Selenium and Python: StaleElementReferenceException


I think some people here understand the Selenium tool well; maybe you can share your knowledge, because I would like to understand it too.

My code is this:

def getZooverLinks(country):
    global countries
    countries = country

    zooverWeb = "http://www.zoover.nl/"
    url = zooverWeb + country

    driver = webdriver.Firefox()
    driver.get(url)

    button = driver.find_element_by_class_name('next')

    links = []

    for page in xrange(1, 4):
        WebDriverWait(driver, 60).until(
            lambda driver: driver.find_element_by_class_name('next'))
        divList = driver.find_elements_by_class_name('blue2')
        for div in divList:
            hrefTag = div.find_element_by_css_selector('a').get_attribute('href')
            print(hrefTag)
            newLink = zooverWeb + hrefTag
            links.append(newLink)

            button.click()
            driver.implicitly_wait(10)

        time.sleep(60)

    return links

So I want to iterate over the result pages, collect the links from the divs having class="blue2", and follow the "next" link to the next result page. But I get a StaleElementReferenceException saying: "Message: Element not found in the cache - perhaps the page has changed since it was looked up".

But the layout of the pages is the same. What is the problem here? Is the URL after the click not handed on to the driver, since the page changes too? How can I fix that?

It is a little bit tricky to follow the pagination on this particular site.

Here is a set of things that helped me overcome the StaleElementReferenceException issue:

  • find the elements inside the loop, since the page changes
  • use explicit waits to wait for specific page numbers to become active
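A complementary technique (not from the original answer) is to wrap the action in a small retry loop, so that if the reference goes stale between the lookup and the click, you simply look the element up again. This is a pure-Python sketch: the `StaleElementReferenceException` class below is a stand-in for `selenium.common.exceptions.StaleElementReferenceException`, and `retry_on_stale` is a hypothetical helper, not part of Selenium:

```python
class StaleElementReferenceException(Exception):
    """Stand-in for selenium.common.exceptions.StaleElementReferenceException."""


def retry_on_stale(action, retries=3):
    """Run action(); if the element reference went stale, retry a few times."""
    for attempt in range(retries):
        try:
            return action()
        except StaleElementReferenceException:
            # The DOM was rebuilt under us; re-running the action lets the
            # caller find the element again from scratch.
            if attempt == retries - 1:
                raise


# Example: the first call hits a stale reference, the retry succeeds.
calls = {"n": 0}

def flaky_click():
    calls["n"] += 1
    if calls["n"] == 1:
        raise StaleElementReferenceException("element is stale")
    return "clicked"

result = retry_on_stale(flaky_click)
```

In real Selenium code the `action` would re-find the element each time, e.g. `lambda: driver.find_element_by_class_name('next').click()`, so a retry always starts from a fresh lookup.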

Working code:

from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

country = "albanie"
zooverWeb = "http://www.zoover.nl/"

url = zooverWeb + country

driver = webdriver.Firefox()
driver.get(url)
driver.implicitly_wait(10)

links = []
for page in xrange(1, 4):
    # tricky part - waiting for the page number on top to appear
    if page > 1:
        WebDriverWait(driver, 60).until(
            EC.text_to_be_present_in_element((By.CSS_SELECTOR, 'div.entityPagingTop strong'), str(page)))
    else:
        WebDriverWait(driver, 60).until(
            EC.visibility_of_element_located((By.CLASS_NAME, 'next')))

    divList = driver.find_elements_by_class_name('blue2')
    for div in divList:
        hrefTag = div.find_element_by_css_selector('a').get_attribute('href')
        newLink = zooverWeb + hrefTag
        links.append(newLink)

    driver.find_element_by_class_name("next").click()

print(links)
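One detail worth double-checking in both snippets (my observation, not part of the original post): `get_attribute('href')` normally returns the already-resolved absolute URL, so prepending `zooverWeb` to it can produce malformed links with the domain doubled. The standard library's `urljoin` handles both absolute and relative hrefs safely; the example URLs below are made up for illustration:

```python
from urllib.parse import urljoin  # Python 3; on Python 2: from urlparse import urljoin

zooverWeb = "http://www.zoover.nl/"

# An absolute href (what get_attribute('href') usually returns)
# passes through urljoin unchanged, instead of being doubled.
hrefTag = "http://www.zoover.nl/albanie/hotel-x"
newLink = urljoin(zooverWeb, hrefTag)

# A relative href is resolved against the base URL.
relLink = urljoin(zooverWeb, "albanie/hotel-y")
```

So replacing `newLink = zooverWeb + hrefTag` with `newLink = urljoin(zooverWeb, hrefTag)` would make the link-building robust either way.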
