I'd like this script to loop through a list of URLs and use a web scraper to pull certain things from each of those websites. I have my cursor set up as well as BeautifulSoup, but I'm wondering how I identify each item in the list. Can I attach an index number to each one somehow? Here's what I have; if I run it, it prints the URLs. I've used BeautifulSoup to find things in HTML before, but I'm not sure how to find things from URLs stored in a list.
from bs4 import BeautifulSoup
import urllib.request
import time
import arcpy
#the hosted layer with website urls in 'Website' field
fc = r'https://services.arcgis.com/fGsbyIOAuxHnF97m/arcgis/rest/services/Grab_Go_School_Meals_Location_(View)/FeatureServer/0?token=fbpCfA34sTJ4rzWO2TQn_c38B4TfGWOZ6jTMeFL1m7CNKd9_odI1t_t_hL-YvvePbE3M428FRT-zW-bISRYrGdJ2CnloKrHoHAfMnbGXpJ-5-zZBU6ONK1u0hMv5D-Vy-fnRpqpQP3aiQEke8L9d9jxDVBKWPamqCa0z0ko4IZX3xpIpHPSEKpmwpcJEaK7Z_rai3IBsT5-tqfMKIxnGCwe4SZZED8bDZM9j1T55-LggpjCgpwqWODs4vpj58iMy'
#use current time to detect change
t = time.ctime()
#fc field where URLs are stored
field = ["Website"]
with arcpy.da.SearchCursor(fc, field) as cursor:
    for row in cursor:
        #query each webpage and return the html to the variable 'soup'
        url = row[0]
        html = urllib.request.urlopen(url)
        soup = BeautifulSoup(html, 'html.parser')
        #count the number of '<h2>' tags in the HTML
        n = len(soup.find_all('h2'))
        #the text of the 26th '<h2>' tag (index 25)
        atts = soup.find_all('h2')[25].text
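One way to attach an index number to each row is Python's built-in `enumerate()`. A minimal sketch, using a plain list of tuples in place of the `arcpy.da.SearchCursor` (the URLs below are placeholders, not from the feature layer):

```python
# Stand-in for the rows a SearchCursor over ["Website"] would yield:
# one tuple per feature, with the URL in position 0.
rows = [
    ("https://example.com/site-a",),
    ("https://example.com/site-b",),
    ("https://example.com/site-c",),
]

indexed = {}
for i, row in enumerate(rows):
    url = row[0]
    # In the real script you would fetch and parse here, e.g.:
    #   html = urllib.request.urlopen(url)
    #   soup = BeautifulSoup(html, 'html.parser')
    # and store whatever you scraped, keyed by the row index.
    indexed[i] = url

print(indexed[0])  # the first URL in the list
```

Because `enumerate()` works on any iterable, the same pattern drops straight onto the cursor: `for i, row in enumerate(cursor):`.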