ODSC West 2019 Preview: Python for Data Acquisition

Editor’s Note: See Phil present his talk “Python for Data Acquisition” at ODSC West 2019.

What does it take, on the technical side, to get a project started? Once you have an idea and something you want to study, you need data. Where do you get it? Primary sources? Websites? Databases? There are many different sources and possibilities. Which ones should you choose? Can you trust that the data will remain available, so your results stay reproducible? Will it be easy to update once new data becomes available? These are just the first of the issues involved in acquiring data for your project, but you can use Python for data acquisition to make all of them easier.

Data Sources

There are many sources of freely available public data. The US federal government runs Data.gov for its public data; the topics covered there span everything the government does, including agriculture, climate, education, transportation, and energy. Individual federal agencies, like NASA, may also publish their own open data. Most states and cities run data websites as well. ODSC West 2019 is in San Francisco, which publishes its own local government data at DataSF.


Other governments and NGOs offer similar portals:

[Related Article: 25 Excellent Machine Learning Open Datasets]

  • European Union
  • Russia
  • UNICEF
  • Data World

Google’s Public Data Directory, Amazon’s AWS Open Data program, Microsoft, and IBM Cloud Data Services all offer open datasets for public use. GitHub hosts curated indexes of many more sites, like Awesome Public Datasets. With a little looking around, there is a dataset for almost anything you want to study!

Gathering Data

Even with this almost infinite supply of options, the data is rarely ready to go for your application or model. You still need to actually download it and parse it into a usable format. The data on these sites is stored in a variety of formats: GIS, CSV, XML, JSON, plain text, HTML, and various binary types. It is quite possible for your project to need data from multiple sources and in multiple formats, and that can create a variety of issues for any project getting started or continuing on.
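One common way to tame multiple formats is to parse each source into one shared record shape. As a minimal sketch, with made-up payloads standing in for two downloaded files, Python's standard `csv` and `json` modules can normalize both into the same list of dictionaries:

```python
import csv
import io
import json

# Hypothetical raw payloads, as they might arrive from two different portals.
csv_text = "city,year,value\nSan Francisco,2019,42\n"
json_text = '[{"city": "Oakland", "year": 2019, "value": 17}]'

def records_from_csv(text):
    """Parse CSV text into a list of dicts with typed fields."""
    rows = csv.DictReader(io.StringIO(text))
    return [{"city": r["city"], "year": int(r["year"]), "value": float(r["value"])}
            for r in rows]

def records_from_json(text):
    """Parse a JSON array into the same record shape."""
    return [{"city": r["city"], "year": int(r["year"]), "value": float(r["value"])}
            for r in json.loads(text)]

# Both sources now look identical to everything downstream.
records = records_from_csv(csv_text) + records_from_json(json_text)
```

The field names and types here are invented for illustration; the point is that once every source is reduced to the same records, the rest of the pipeline no longer cares where the data came from.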

Once we have this data, how do we hang on to it? Do we want to download it again for every application or model built on it? What happens if the website goes away, changes its policies, or changes its format? Storing the data in your own database eases all of these issues. Once you have downloaded and cleaned your data, store it in a local database. From there, all future applications and models only need to query the database, without worrying about how the data was originally obtained.
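A local cache like this can be sketched with Python's built-in `sqlite3` module (the table name and columns below are illustrative, not from any particular dataset):

```python
import sqlite3

# A local cache: downstream models read from here instead of re-downloading.
conn = sqlite3.connect(":memory:")  # use a file path for a persistent cache
conn.execute("""CREATE TABLE IF NOT EXISTS measurements (
                    city TEXT, year INTEGER, value REAL)""")

# Records that have already been downloaded and cleaned up.
cleaned = [("San Francisco", 2019, 42.0), ("Oakland", 2019, 17.0)]
conn.executemany("INSERT INTO measurements VALUES (?, ?, ?)", cleaned)
conn.commit()

# Later applications query the cache, never the original website.
rows = conn.execute(
    "SELECT city, value FROM measurements WHERE year = ?", (2019,)
).fetchall()
```

SQLite is handy for a single-machine cache; the same pattern carries over to PostgreSQL or another server when several applications need to share the data.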

This is where Python comes in! It can handle all of these tasks with the right libraries and some coding. With the Requests library, downloading web pages and other files is very simple, and given the right credentials it can also log into a server for non-public or restricted data. If the files are compressed, Python has archiving libraries for that. For the various formats there are standard-library modules such as csv, json, and regular expressions. From there, storing data in a database can be done by wrapping SQL in Python via psycopg2 or by defining ORMs in SQLAlchemy.
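Putting the download and decompression steps together, a fetch might look like the sketch below. The URL is made up, so the actual `requests.get` call is shown in comments and the downloaded bytes are simulated with an in-memory gzip payload; everything else runs as written:

```python
import gzip

# Hypothetical fetch of a gzip-compressed CSV from an open-data portal.
# With the Requests library this would be:
#     import requests
#     resp = requests.get("https://data.example.gov/export.csv.gz", timeout=30)
#     resp.raise_for_status()
#     raw = resp.content
# For a self-contained sketch, we simulate the downloaded bytes instead:
raw = gzip.compress(b"city,value\nSan Francisco,42\n")

# Python's compression libraries unpack the payload entirely in memory.
text = gzip.decompress(raw).decode("utf-8")
header = text.splitlines()[0]  # ready to hand off to the csv module
```

From here the decompressed text feeds straight into the parsing and database steps described above.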

[Related Article: Jupyter Notebook: Python or R—Or Both?]

This Course on Python for Data Acquisition

The goal of this course, which I’ll be presenting at ODSC West 2019, is to walk students through this whole process, with a few hands-on labs along the way. Students will learn to parse various data file formats, download data, and interact with a database for storing and retrieving it.

Phil Tracton


Phil Tracton earned his BS in Electrical Engineering at the University of Maryland and his MS at California State University, Northridge. He has spent the last 17 years working on firmware and electronics for implantable medical equipment at Medtronic, where he is currently a Principal IC Design Engineer working on deep brain stimulation and implantable drug pumps. For the last 9 years he has been teaching at UCLA Extension; his classes include Python Programming 1, Embedded Software 1, and Using FPGAs in Embedded Systems, and he is finalizing a new class on using Python on a Raspberry Pi. In data science and machine learning, he is interested in applications in the biomedical device space. LinkedIn: https://www.linkedin.com/in/phil-tracton-113553/ GitHub: https://github.com/ptracton
