The Requests library is one of the most powerful and user-friendly HTTP libraries available in Python. It is widely used for making HTTP requests and handling responses in a simple and human-friendly manner. Whether you're interacting with a REST API, scraping web content, or downloading files, Requests makes it easy to send HTTP/1.1 requests and work with responses.
Before Requests, handling HTTP connections in Python required complex and verbose code using modules like urllib and http.client. Requests abstracts these complexities and provides a clean, consistent interface.
Requests is not included in the standard Python library, so it needs to be installed separately using pip:
pip install requests
To verify installation:
import requests
print(requests.__version__)
import requests
response = requests.get("https://www.example.com")
print(response.status_code)
print(response.text)
print(response.status_code)  # HTTP status code
print(response.headers)      # Response headers
print(response.url)          # Final URL after redirection
print(response.content)      # Binary content
print(response.encoding)     # Encoding used by the response
The Requests library supports all major HTTP methods:
data = {'username': 'john', 'password': 'doe'}
response = requests.post("https://httpbin.org/post", data=data)
print(response.text)
data = {'name': 'Updated Name'}
response = requests.put("https://httpbin.org/put", data=data)
response = requests.delete("https://httpbin.org/delete")
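Requests also exposes `head()`, `options()`, and `patch()` helpers. As a quick sketch, a prepared request shows the exact method and encoded body that would go over the wire, without opening a network connection (the URL and payload here are illustrative):

```python
import requests

# Build a PATCH request without sending it; prepare() applies the same
# encoding that requests.patch() would use.
req = requests.Request("PATCH", "https://httpbin.org/patch",
                       data={"name": "Partial Update"}).prepare()
print(req.method)  # PATCH
print(req.body)    # name=Partial+Update
```

The same pattern works for any verb, which makes it handy for debugging what a call will actually send.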
params = {'search': 'python', 'page': 2}
response = requests.get("https://example.com/search", params=params)
print(response.url)
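Under the hood, `params` values are URL-encoded for you. A prepared request (no network traffic) makes that visible; note how the space in the search term is escaped:

```python
import requests

# prepare() builds the final URL exactly as requests.get() would.
req = requests.Request("GET", "https://example.com/search",
                       params={"search": "python requests", "page": 2}).prepare()
print(req.url)  # https://example.com/search?search=python+requests&page=2
```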
payload = {'key1': 'value1', 'key2': 'value2'}
response = requests.post("https://httpbin.org/post", data=payload)
print(response.json())
json_data = {'name': 'Alice', 'email': 'alice@example.com'}
response = requests.post("https://httpbin.org/post", json=json_data)
print(response.json())
response = requests.get("https://api.github.com")
data = response.json()
print(data['current_user_url'])
You can modify request headers by passing a dictionary:
headers = {
    'User-Agent': 'my-app/0.0.1',
    'Accept': 'application/json'
}
response = requests.get("https://httpbin.org/headers", headers=headers)
print(response.json())
Timeouts prevent a request from hanging indefinitely. Always set one:
try:
    response = requests.get("https://httpbin.org/delay/5", timeout=3)
except requests.exceptions.Timeout:
    print("The request timed out")
cookies = {'session_id': '123abc'}
response = requests.get("https://httpbin.org/cookies", cookies=cookies)
print(response.text)
response = requests.get("https://httpbin.org/cookies/set?name=value")
print(response.cookies)
The Session object allows you to persist cookies and headers across multiple requests.
s = requests.Session()
s.headers.update({'User-Agent': 'my-app'})
s.get("https://httpbin.org/cookies/set/sessioncookie/123456789")
r = s.get("https://httpbin.org/cookies")
print(r.text)
Requests provides support for HTTP Basic and Digest authentication.
from requests.auth import HTTPBasicAuth
response = requests.get("https://httpbin.org/basic-auth/user/pass", auth=HTTPBasicAuth('user', 'pass'))
print(response.status_code)
from requests.auth import HTTPDigestAuth
response = requests.get("https://httpbin.org/digest-auth/auth/user/pass", auth=HTTPDigestAuth('user', 'pass'))
print(response.status_code)
By default, Requests automatically follows redirects:
response = requests.get("http://github.com")
print(response.url) # Final URL
print(response.history) # List of Response objects for each redirect
response = requests.get("http://github.com", allow_redirects=False)
print(response.status_code)
with open('test.txt', 'rb') as f:
    files = {'file': f}
    response = requests.post("https://httpbin.org/post", files=files)
print(response.text)
url = 'https://example.com/image.jpg'
r = requests.get(url)
with open('image.jpg', 'wb') as f:
    f.write(r.content)
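For large files, `response.content` loads the entire body into memory first. A streaming sketch (the URL, filename, and chunk size below are illustrative) writes the download to disk in chunks instead:

```python
import requests

def download(url, path, chunk_size=8192):
    """Stream a response body to disk without holding it all in memory."""
    with requests.get(url, stream=True, timeout=10) as r:
        r.raise_for_status()
        with open(path, "wb") as f:
            # iter_content yields the body piece by piece as it arrives.
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)

# Example usage:
# download('https://example.com/image.jpg', 'image.jpg')
```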
proxies = {
    'http': 'http://10.10.1.10:3128',
    'https': 'http://10.10.1.10:1080',
}
response = requests.get("http://example.com", proxies=proxies)
try:
    response = requests.get("https://example.com", timeout=5)
    response.raise_for_status()
except requests.exceptions.HTTPError as errh:
    print("HTTP error:", errh)
except requests.exceptions.Timeout as errt:
    print("Timeout error:", errt)
except requests.exceptions.ConnectionError as errc:
    print("Connection error:", errc)
except requests.exceptions.RequestException as err:
    print("Other request error:", err)
# Default (verifies SSL)
response = requests.get("https://example.com")
# Disable SSL verification (not recommended)
response = requests.get("https://example.com", verify=False)
from requests.adapters import HTTPAdapter
s = requests.Session()
s.mount("https://", HTTPAdapter(max_retries=3))
response = s.get("https://example.com")
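Passing a plain integer as `max_retries` only retries failed connections. For retries on specific HTTP status codes with exponential backoff, you can hand the adapter a `Retry` object from urllib3 (the counts and codes below are illustrative, not recommendations):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry up to 3 times, waiting longer between attempts, but only for
# transient statuses such as 429 (rate limited) and 5xx server errors.
retry = Retry(total=3, backoff_factor=0.5,
              status_forcelist=[429, 500, 502, 503, 504])

s = requests.Session()
s.mount("https://", HTTPAdapter(max_retries=retry))
```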
The Requests library is widely used to interact with web APIs, such as RESTful services.
url = "https://api.github.com/users/octocat"
response = requests.get(url)
data = response.json()
print("Username:", data['login'])
print("ID:", data['id'])
print("URL:", data['html_url'])
Many APIs enforce rate limits. Respect them by checking response headers such as:
response.headers.get('X-RateLimit-Remaining')
response.headers.get('Retry-After')
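One way to act on those headers is a small helper like the hypothetical one below (it assumes `Retry-After` carries a number of seconds; servers may also send an HTTP date, which would need extra parsing):

```python
def backoff_seconds(headers, default_wait=1.0):
    """Return how long to wait before the next call, given response headers."""
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        return float(retry_after)   # server said exactly how long to wait
    if headers.get("X-RateLimit-Remaining") == "0":
        return default_wait         # quota exhausted; back off briefly
    return 0.0                      # still within the limit

print(backoff_seconds({"Retry-After": "30"}))           # 30.0
print(backoff_seconds({"X-RateLimit-Remaining": "0"}))  # 1.0
print(backoff_seconds({}))                              # 0.0
```

In practice you would call `time.sleep(backoff_seconds(response.headers))` between requests.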
Although both can handle HTTP, Requests is more concise and easier to use than urllib:
| Feature | urllib | requests |
|---|---|---|
| Ease of Use | Low | High |
| Included in the standard library | Yes | No |
| Sessions | Manual | Built-in |
| JSON Handling | Manual | Built-in |
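To make the "Ease of Use" row concrete, here is the same query string built by hand with urllib and automatically by Requests (no network traffic involved; the URL is illustrative):

```python
from urllib.parse import urlencode
import requests

# With urllib you assemble the query string yourself...
url = "https://example.com/search?" + urlencode({"q": "python"})

# ...while Requests builds it from a params dict.
req = requests.Request("GET", "https://example.com/search",
                       params={"q": "python"}).prepare()

print(url == req.url)  # True
```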
The Python Requests library is a simple yet powerful tool for interacting with HTTP servers. It abstracts the complexities of making requests behind a clean and user-friendly API. Whether you are fetching data from a website, communicating with a RESTful API, uploading files, or managing sessions and cookies, Requests makes your life easier.
Its wide adoption and extensive documentation make it a favorite among Python developers. For more advanced needs like browser automation or scraping JavaScript-heavy websites, consider integrating it with other tools such as BeautifulSoup, Selenium, or using frameworks like Scrapy.
Copyrights © 2024 letsupdateskills All rights reserved