If you followed along with my analysis of the top Christmas movies according to the MovieLens 10M dataset, you may remember that I obtained the list of Christmas movies by scraping this page. I am fairly new to web scraping and this is one of my first serious attempts to get data in this manner. However, you can see that even with a basic understanding of how to find information in webpages it is relatively easy to extract the information you need. Let’s get started!
Web scraping with Python is easy due to the many useful libraries available; a barebones installation isn’t enough, but with just a couple of packages it only takes a few lines of code to pull data out of a page. For this tutorial we’ll be using requests to fetch the page and lxml to parse it.
Setting up your virtualenv
The first step as always with Python analyses is to set up a virtualenv. If you’re unfamiliar with how to do this, this blog post explains how (and I promise you will never want to go back to system-installing packages!). Once you’re in your virtualenv, install the following packages:
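The package list itself did not survive here; based on the code that follows, the install would have looked something like this (ipython is only needed for the notebook step):

```shell
pip install requests lxml ipython
```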
We then enter ipython notebook into the command line and start a new notebook (alternatively you can use your preferred Python IDE).
Scraping the movie titles
Once we’re ready to go, we first import our newly installed packages.
The next thing we need to do is find out where our titles are in the page. This is pretty straightforward in a browser that supports developer tools, like Chrome. With Chrome, we simply need to go to our list of top 50 movies, right click, and select “Inspect”. This brings up the developer tools for the page.
Once you’ve done that the developer tools will open on the right of the screen. In the image above, I have highlighted a button that allows you to view the tags associated with any element of the page. If you click on this and select one of the movie titles, it will take you to the title tag, like so:
Ah ha! We can see that the first title, ‘The Santa Clause’, is tagged as div.feature-item__text h3 a. However, the rest of the movies (for example, ‘Joyeux Noël’) are tagged only as div.feature-item__text h3. Huh, that creates some problems. To get around this, the function below checks whether the title tag contains an a (anchor) element, and if so, looks in there for the title. Otherwise, it looks in the h3 element for the title.
Now that we’ve set up where to look for the titles, we can extract the data from the website. The requests.get() function pulls the data from the website, and the lxml.html.fromstring(r.text) command parses the html into the tree variable.
We can now select the parts of the html we want. We can see in the screenshot above that the titles are contained within the article.feature-item tag, therefore we select all data under this tag.
We can now apply our get_title function to the items we pulled out using list comprehension. Let’s have a look at what we got:
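The original code blocks were lost in conversion; the pipeline described above plausibly looked like the sketch below. The live request is shown in a comment and a small inline sample stands in for the page, so the sketch runs standalone:

```python
import lxml.html

def get_title(item):
    """Return the title from a feature item, whether or not the
    <h3> wraps the title in an <a> (anchor) element."""
    h3 = item.find('.//h3')
    anchor = h3.find('a')
    if anchor is not None:
        return anchor.text_content()
    return h3.text_content()

# On the live page the html would come from:
#   import requests
#   r = requests.get(url)
#   tree = lxml.html.fromstring(r.text)
# A small inline sample stands in here:
html = '''
<article class="feature-item">
  <div class="feature-item__text"><h3><a>The Santa Clause</a></h3></div>
</article>
<article class="feature-item">
  <div class="feature-item__text"><h3>Joyeux Noël</h3></div>
</article>
'''
tree = lxml.html.fromstring(html)

# Select every movie entry, then apply get_title with a list comprehension.
items = tree.xpath('//article[@class="feature-item"]')
titles = [get_title(item) for item in items]
```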
Ok, this is a bit of a mess. To use it we need to clean it up using a bit of string manipulation.
Cleaning up the list of titles
The first major issue you can see is that a number of titles contain whitespace and newline escape characters (\n). We’ll get rid of the newlines by calling the replace method, and the whitespace by calling the strip method.
As we don’t need to preserve the unicode formatting, we’ll use a lazy method to get our apostrophes back. We first call the encode('utf8') method, which encodes the text as UTF-8 bytes. We then call the replace method again to change the UTF-8 byte sequence for the apostrophe into an actual apostrophe.
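In Python 3 the byte-level encode step is unnecessary; a minimal sketch of the cleanup (the sample title is invented for illustration):

```python
# A raw title as it might come back from the page: padded with
# whitespace, newlines, and a curly unicode apostrophe.
raw_title = "\n      Mickey\u2019s Christmas Carol (1983)\n    "

# Drop the newlines with replace, then trim the whitespace with strip.
title = raw_title.replace("\n", "").strip()

# Swap the curly apostrophe for a plain one.
title = title.replace("\u2019", "'")
```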
If you followed my previous post on the Christmas analyses, you would have seen that titles that start with “The” or “A” have this leading article moved to the end of the title. For example, “The Santa Clause (1994)” is represented as “Santa Clause, The (1994)” in the MovieLens 10M dataset. To change all of these, I wrote two small loops, which first use a regex to check whether the title starts with “The” or “A”, remove this word from the beginning of the title, and use indexing to place it at the end. The loops rely on the enumerate function to get both the index and content of each item in the list.
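The loops themselves were lost in conversion; collapsed into a single loop, they might look like this (the list contents are illustrative):

```python
import re

titles = ['The Santa Clause (1994)', 'A Christmas Story (1983)', 'Elf (2003)']

# Move a leading "The" or "A" to just before the year, MovieLens-style:
# "The Santa Clause (1994)" -> "Santa Clause, The (1994)"
for i, title in enumerate(titles):
    match = re.match(r'^(The|A) (.*) (\(\d{4}\))$', title)
    if match:
        article, rest, year = match.groups()
        titles[i] = f'{rest}, {article} {year}'
```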
Finally, I haven’t yet replaced the UTF-8 character for ‘ö’ in ‘Joyeux Noël’. However, as I know there are issues with the ‘ö’ character in the MovieLens 10M dataset as well, I’ll just truncate the whole title to ‘Joyeux’ which is sufficient to get an exact match between the two movie lists.
We now have a complete list! Let’s have a look at how it’s turned out:
The last thing left to do is to export the whole thing to a text file. In the case of my analysis, I then imported this into an SQL database with the MovieLens 10M dataset, and I will describe how I used it in my next blog post.
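The export itself is a one-liner; a sketch (the filename and list contents are placeholders):

```python
titles = ['Santa Clause, The (1994)', 'Joyeux']

# Write one title per line to a plain text file.
with open('christmas_movies.txt', 'w') as f:
    f.write('\n'.join(titles))
```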
I hope this was a useful introduction which demystifies the basics of web scraping. You can see how using your browser’s developer tools to find the specific tags, and the lxml package to select the content under those tags, makes it relatively straightforward to get the information you need. It is then a matter of using some pretty simple string manipulation to clean up the data, and voila! You have successfully scraped the data you need from a web page!
Scraping data from a website is one of the most underestimated technical and business moves I can think of. Scraping data online is something every business owner can do to create a copy of a competitor’s database and analyze the data to achieve maximum profit. It can also be used to analyze a specific market and find potential customers. The best thing is that it is all free of charge; it only needs some technical skills, which many people have these days. In this post I am going to do a step-by-step tutorial on how to scrape data from a website and save it to a database. I will be using BeautifulSoup, a Python library designed to facilitate screen scraping. I will also use MySQL as my database software. You can probably use the same code with some minor changes in order to use your desired database software.
Please note that scraping data from the internet can be a violation of the terms of service for some websites. Please do appropriate research before scraping data from the web and/or publishing the gathered data.
GitHub Page for This Project
I have also created a GitHub project for this blog post. I hope you will be able to use the tutorial and code and customize them according to your needs.
Requirements
- Basic knowledge of Python. (I will be using Python 3 but Python 2 would probably be sufficient too.)
- A machine with a database software installed. (I have MySQL installed on my machine)
- An active internet connection.
The following installation instructions are very basic. The installation process for Beautiful Soup, Python etc. can be a bit more involved, but since it differs across platforms and individual machines, it does not fit into the main focus of this post: scraping data from a website and saving it to a database.
Installing Beautiful Soup

In order to install Beautiful Soup, open a terminal and execute one of the following commands, depending on your version of Python. Please note that the commands may need to be prefixed with “sudo” in order to give them administrative access.
For Python 2 (If pip did not work try it with pip2):
For Python 3:
If the above did not work, you can also use the following command.
For Python 2:
For Python 3:
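The install commands themselves did not survive the page conversion; under the standard package name they would most likely be:

```shell
# With pip
pip install beautifulsoup4     # Python 2 (or pip2)
pip3 install beautifulsoup4    # Python 3

# Fallback if pip did not work (assumed; easy_install is the usual alternative)
easy_install beautifulsoup4
```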
If you were not able to install beautiful soup, just Google the term “How to install Beautiful Soup” and you will find plenty of tutorials.
Installing MySQLdb
In order to be able to connect to your MySQL databases through Python, you will have to install the MySQLdb library for your Python installation. Since Python 3 does not support MySQLdb at the time of this writing, you will need to use a different library called mysqlclient, which is basically a fork of MySQLdb with added support for Python 3 and some other improvements. Using it is basically identical to using the native MySQLdb for Python 2.
To install MySQLdb on Python 2, open terminal and execute the following command:
To install mysqlclient on Python 3, open terminal and execute the following command:
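The commands were likely as follows (PyPI package names assumed: MySQL-python for Python 2, mysqlclient for Python 3):

```shell
pip install MySQL-python     # Python 2
pip3 install mysqlclient     # Python 3
```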
Installing requests
requests is a Python library which is used to load html data from a url. In order to install requests on your machine, follow the instructions below.
To install requests on Python 2, open terminal and execute the following command:
To install requests on Python 3, open terminal and execute the following command:
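The commands were presumably the standard pip installs:

```shell
pip install requests     # Python 2
pip3 install requests    # Python 3
```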
Now that we have everything installed and running, let’s get started.
Step by Step Guide on Scraping Data from a Single Web Page
I have created a page with some sample data which we will be scraping data from. Feel free to use this url and test your code. The page we will be scraping in the course of this article is https://howpcrules.com/sample-page-for-web-scraping/.
1. First of all we have to create a file called “scraping_single_web_page.py”.
2. Now, we will start by importing the libraries requests, MySQLdb and BeautifulSoup:
3. Let us create some variables where we will save our database connection data. To do so, add the lines below to your “scraping_single_web_page.py” file.
4. We also need a variable to save the url to be scraped into. After that, we will use the imported library “requests” to load the web page’s html plain text into the variable “plain_html_text”. In the next line, we will use BeautifulSoup to build a parse tree, “soup”, which will be a big help to us in reading out the web page’s content efficiently.
Your whole code should be looking like this till now:
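The code block itself was lost in conversion; based on steps 2–4 it plausibly looked like this (the credentials are placeholders for your own setup):

```python
import requests
import MySQLdb
from bs4 import BeautifulSoup

# Database connection data (placeholders; use your own credentials)
db_host = 'localhost'
db_user = 'scraping_sample_user'
db_pass = 'your_password'
db_name = 'scraping_sample'

# The page to be scraped
url_to_scrape = 'https://howpcrules.com/sample-page-for-web-scraping/'

# Load the page's html plain text and parse it with BeautifulSoup
plain_html_text = requests.get(url_to_scrape).text
soup = BeautifulSoup(plain_html_text, 'html.parser')
```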
These few lines of code were enough to load the data from the web and parse it. Now we will start the task of finding the specific elements we are searching for. To do so we have to take a look at the page’s html and find the elements we want to save. All major web browsers offer the option to view a page’s html plain text. If you are not able to see the html plain text in your browser, you can also add the following line to the end of your “scraping_single_web_page.py” file to print the loaded html data in your terminal window.
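That print call, shown here with a tiny inline document standing in for the real page’s soup object so the sketch runs standalone:

```python
from bs4 import BeautifulSoup

# A tiny inline document stands in for the real page.
soup = BeautifulSoup('<html><body><h3>Sample</h3></body></html>', 'html.parser')

# Pretty-print the parsed html, indented one tag per line.
print(soup.prettify())
```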
To execute the code, open a terminal, navigate to the folder where you have your “scraping_single_web_page.py” file, and run it with “python scraping_single_web_page.py” for Python 2 or “python3 scraping_single_web_page.py” for Python 3. You will see the html data printed out in your terminal window.
5. Scroll down until you find my html comment “<!-- Start Sample Data to be Scraped -->”. This is where the actual data we need starts. As you can see, the name of the class, “Exercise: Data Structures and Algorithms”, is written inside a <h3> tag. Since this is the only h3 tag in the whole page, we can use “soup.h3” to get this specific tag and save its content into the variable “name_of_class”. We will also use Python’s strip() function to remove any spaces to the left and right of the text. (Please note that the line print(soup.prettify()) was only there to print out the html data and can be deleted now.)
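The code for this step was lost in conversion; a sketch, with an inline stand-in for the page so it runs standalone:

```python
from bs4 import BeautifulSoup

# Inline stand-in for the sample page, which has a single <h3>.
html = '<h3> Exercise: Data Structures and Algorithms </h3>'
soup = BeautifulSoup(html, 'html.parser')

# soup.h3 returns the only <h3> tag; strip() trims surrounding spaces.
name_of_class = soup.h3.text.strip()
```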
6. If you scroll down a little, you will see that the table with the basic information about the class is identified by summary=”Basic data for the event” inside its <table> tag. So we will save the parsed table in a variable called “basic_data_table”. If you take a closer look at the tags inside the table, you will realize that the data itself, as opposed to the field titles, is saved inside <td> tags. These <td> tags have the following order from top to bottom:
According to the above, all the text inside the <td> tags is relevant and needs to be stored in appropriate variables. To do so we first have to parse out all the <td>s inside our table.
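The code for this step did not survive; a sketch with a tiny stand-in table (the row contents are invented):

```python
from bs4 import BeautifulSoup

# Tiny stand-in for the sample page's basic-data table.
html = '''
<table summary="Basic data for the event">
  <tr><th>Number</th><td> 0123456 </td></tr>
  <tr><th>Language</th><td> English </td></tr>
</table>
'''
soup = BeautifulSoup(html, 'html.parser')

# The table is identified by its summary attribute.
basic_data_table = soup.find('table', {'summary': 'Basic data for the event'})

# The values themselves all sit in <td> tags.
basic_data_cells = basic_data_table.find_all('td')
```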
7. Now that we have all the <td>s stored in the variable “basic_data_cells”, we have to go through the variable and save the data accordingly. (Please note that array indexes start from zero, so the numbers in the above picture will be shifted by one.)
8. Let’s continue with the course dates. Like the previous table, we have to parse the tables where the dates are written. The only difference is that for the dates there is not just one table to be scraped but several. So we will scrape all the tables into one variable:
9. We now have to go through all the tables and save the data accordingly. This means we have to create a for loop to iterate through the tables. In each table, there is always one row (<tr> tag) as the header, with <th> cells inside (no <td>s). After the header there can be one to several rows with the data that interests us. So inside our for loop over the tables, we also have to iterate through the individual rows (<tr>s) that have at least one cell (<td>) inside, in order to exclude the header row. Then we only have to save the contents of each cell into appropriate variables.
This all is translated into code as follows:
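The translated code was lost in conversion; a sketch of the nested loop, using a small inline table and invented column names (day, time) in place of the real cell variables:

```python
from bs4 import BeautifulSoup

# Inline stand-in for the dates tables: one header row, two data rows.
html = '''
<table>
  <tr><th>Day</th><th>Time</th></tr>
  <tr><td>Mon</td><td>10:00</td></tr>
  <tr><td>Wed</td><td>14:00</td></tr>
</table>
'''
soup = BeautifulSoup(html, 'html.parser')
tables = soup.find_all('table')

rows_seen = []
for table in tables:
    for row in table.find_all('tr'):
        cells = row.find_all('td')
        if len(cells) > 0:  # skip the header row, which has only <th> cells
            day = cells[0].text.strip()
            time = cells[1].text.strip()
            rows_seen.append((day, time))
```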
Please note that the above code reads the data and overwrites the same variables over and over again, so we have to create a database connection and save the data in each iteration.

Saving Scraped Data into a Database
Now, we are all set to create our tables and save the scraped data. To do so please follow the steps below.
10. Open your MySQL software (PhpMyAdmin, Sequel Pro etc.) on your machine and create a database with the name “scraping_sample”. You also have to create a user with the name “scraping_sample_user”. Do not forget to give at least write privileges on the database “scraping_sample” to the user “scraping_sample_user”.
11. After you have created the database navigate to the “scraping_sample” database and execute the following command in your MySQL command line.
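The DDL itself did not survive the conversion; a sketch consistent with the two-table layout and the on-delete-cascade constraint described here (the column names are illustrative, not the original schema):

```sql
CREATE TABLE classes (
  id INT NOT NULL AUTO_INCREMENT,
  name_of_class VARCHAR(255),
  language VARCHAR(100),
  PRIMARY KEY (id)
);

CREATE TABLE events (
  id INT NOT NULL AUTO_INCREMENT,
  class_id INT NOT NULL,
  day VARCHAR(50),
  time VARCHAR(50),
  max_participants INT,
  PRIMARY KEY (id),
  FOREIGN KEY (class_id) REFERENCES classes(id) ON DELETE CASCADE
);
```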
Now you have two tables, with the classes in the first one and the corresponding events in the second. We have also created a foreign key from the events table to the classes table, along with a constraint that deletes the events associated with a class if the class is removed (on delete cascade).
We can go back to our code “scraping_single_web_page.py” and start with the process of saving data to the database.
12. In your code, navigate to the end of step 7, the line “language = basic_data_cells[12].text.strip()”, and add the following below it to be able to save the class data:
Here, we use the MySQLdb library to establish a connection to the MySQL server and insert the data into the table “classes”. After that, we execute a query to get back the id of the just-inserted class; the value is saved in the “class_id” variable. We will use this id to add the corresponding events into the events table.
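The code for this step was lost in conversion; a sketch of what it plausibly looked like (the credentials, column names, and the literal values standing in for the scraped variables are all assumptions):

```python
import MySQLdb

# Literals stand in here for the values scraped in steps 5 and 7.
name_of_class = 'Exercise: Data Structures and Algorithms'
language = 'English'

# Connect using the connection data from step 3 (placeholder credentials).
db = MySQLdb.connect(host='localhost', user='scraping_sample_user',
                     passwd='your_password', db='scraping_sample')
cursor = db.cursor()

# Insert the class data.
cursor.execute(
    'INSERT INTO classes (name_of_class, language) VALUES (%s, %s)',
    (name_of_class, language))
db.commit()

# Get back the id of the just-inserted class.
cursor.execute('SELECT LAST_INSERT_ID()')
class_id = cursor.fetchone()[0]
```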
13. We will now save each and every event into the database. To do so, navigate to the end of step 9, the line “max_participants = cells[9].text.strip()”, and add the following below it. Please note that the code has to be exactly below that line and inside the last if statement.
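A sketch of that fragment; db, cursor, and class_id come from step 12, the cell variables from step 9, and the column names are assumptions:

```python
# Insert one event row per loop iteration, linked to the class via class_id.
cursor.execute(
    'INSERT INTO events (class_id, day, time, max_participants) '
    'VALUES (%s, %s, %s, %s)',
    (class_id, day, time, max_participants))
db.commit()
```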
Here, we are using the variable “class_id”, first mentioned in step 12, to add the events to the just-added class.
Scraping Data from Multiple Similar Web Pages
This is the easiest part of all. The code will work just fine if you have several different but similar web pages you would like to scrape data from. Just put the whole code, excluding steps 1–3, in a for loop where the “url_to_scrape” variable is dynamically generated. I have created a sample script where the same page is scraped a few times over to illustrate this process. To check out the script and a fully working example of the above, navigate to my “Python-Scraping-to-Database-Sample” GitHub page.
