This script requests a single URL and prints the response in XML format.
import requests
from bs4 import BeautifulSoup
url = r'''https://secure.shippingapis.com/ShippingAPI.dll?API=Verify&XML=
<AddressValidateRequest USERID="564WILLC0589"><Address><Address1>
2451 Avalon Ct</Address1><Address2></Address2><City>Aurora</City>
<State>IL</State><Zip5></Zip5><Zip4></Zip4></Address></AddressValidateRequest>'''
#get the webpage
response = requests.get(url)
#look at the response text to check whether the URL was handled correctly
r_url = response.text
#parse the downloaded page into a BeautifulSoup object and pretty-print it
new_xml = BeautifulSoup(r_url, features="xml").prettify()
print(new_xml)
>>>
<?xml version="1.0" encoding="utf-8"?>
<AddressValidateResponse>
 <Address>
  <Address2>
   2451 AVALON CT
  </Address2>
  <City>
   AURORA
  </City>
  <State>
   IL
  </State>
  <Zip5>
   60503
  </Zip5>
  <Zip4>
   8574
  </Zip4>
 </Address>
</AddressValidateResponse>
>>>
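A side note on the encoding comment above: the line breaks inside the raw string become part of the URL, and a hand-built query string is easy to get wrong. A different way to build the same request (a sketch only, reusing the endpoint and USERID from the script above) is to let requests assemble and percent-encode the query string through its params argument; response.url then shows exactly what was sent:

import requests
from bs4 import BeautifulSoup

base_url = 'https://secure.shippingapis.com/ShippingAPI.dll'
xml_request = ('<AddressValidateRequest USERID="564WILLC0589"><Address>'
               '<Address1>2451 Avalon Ct</Address1><Address2></Address2>'
               '<City>Aurora</City><State>IL</State><Zip5></Zip5><Zip4></Zip4>'
               '</Address></AddressValidateRequest>')

# requests percent-encodes the params values, so the XML needs no manual escaping
response = requests.get(base_url, params={'API': 'Verify', 'XML': xml_request})

# response.url is the fully encoded URL that was actually requested
print(response.url)
print(BeautifulSoup(response.text, features='xml').prettify())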
But I have a list of URLs that need to be printed in XML format. Using this list, how can I pass one item at a time into requests.get()? Link to text file.
#text file that has all the URLs
txtfile = r'C:\Users\jpilbeam\USPSAPIWCHDUpdateAll.txt'
#convert text file into a list
with open(txtfile) as f:
    x = list(map(str.strip, f.readlines()))
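As a quick sanity check (not part of the original script), printing the count and the first few entries confirms the file was read into a list with one URL per line:

print(len(x))
print(x[:3])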
If x is a list of URL strings from the text file, you just iterate over x, passing each value to requests and printing it:
for i in x:
    response = requests.get(i)
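Putting the pieces together, the whole workflow might look like the sketch below; the blank-line filter is an assumption about the text file, not something from the original post:

import requests
from bs4 import BeautifulSoup

txtfile = r'C:\Users\jpilbeam\USPSAPIWCHDUpdateAll.txt'

with open(txtfile) as f:
    # strip whitespace and skip blank lines, which are not valid URLs
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    response = requests.get(url)
    # pretty-print each XML response, as in the single-URL script
    print(BeautifulSoup(response.text, features="xml").prettify())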
Thanks! That was it. There's a problem now with the URL in this script; I've posted another question about it here: https://community.esri.com/thread/251431-missingschema-invalid-url-no-schema-supplied
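For what it's worth, requests raises MissingSchema when a URL has no http:// or https:// prefix, which often comes from a blank or truncated line in the text file. A small filter like this (an assumption about the file contents, not taken from that thread) skips such lines before requesting:

# keep only lines that actually start with a URL scheme
urls = [u for u in x if u.startswith(('http://', 'https://'))]
print(len(x) - len(urls), 'lines skipped for having no scheme')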