Understanding the World Through Data
About this course
Speech recognition, drones, and self-driving cars – things that once seemed like pure science fiction – are now widely available technologies, and just a few examples of how humans have taught machines to analyze data and make decisions. In this hands-on, introductory course, you will examine all the forms in which data exists, learn tools that uncover relationships between data, and leverage basic algorithms to understand the world from a new perspective.
Whether you're a high school student or someone switching careers, all you need to get started in this course is a curiosity about the topic of machine learning and a willingness to tinker around with your computer.
The course is taught in modules. Within each module, you'll have access to videos, short exercises, and a final capstone project. In Module 1, you'll begin by looking at different kinds of data. To help you explore the data, you'll dive right into some programming with the Python programming language. You don't need any programming background; we will guide you in leveraging Python to explore and visualize any data.
One kind of data you'll work with is data that relates one variable to another. Coming up with a relationship between two variables—one depending on the other—is at the center of Module 2. In that module, you'll build up some core concepts before seeing your first machine learning algorithm. The goal is to use programming to create models that describe mathematical relationships between data. You'll be able to see how good the model is and use it to make predictions about new data.
In Module 3, you'll see a discussion about where imperfections in collected data might come from. You rarely have perfectly “clean” data sets, so it's important to understand how imperfections impact the model that an algorithm might come up with. To this end, we will introduce the notion of data distributions and build up to the concepts of biased and unbiased noise.
Another kind of data you'll work with is data that belongs in different groups (or classes). Creating a model that predicts which group data belongs in is at the center of Module 4. You'll work through different ways of thinking about this problem and see three different approaches to making such groupings (classification).
At a glance
What you'll learn
- Python programming and the Colab notebook programming environment
- Dependent and independent variables
- Coming up with relationships between data using linear and polynomial regression models
- Recognizing how data is distributed
- How to observe noise in distributions and when to ignore it
- Categorizing data into groups with classification models
- And more!
Module 1: How to represent and manipulate data
- Examples of numerical data
- The Python programming language and the Colab notebook programming environment
- Loading datafiles in Colab as dataframes and performing simple operations (selecting rows or columns, filtering data by specific conditions, grouping data, applying functions on the resulting groups)
- Finding the correlation between columns of the dataframe
- Visualizing the data using line plots, scatter plots, histograms, correlation matrix
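The dataframe operations above can be sketched with pandas. The dataset and its columns here are assumptions made up for illustration, not course materials:

```python
import pandas as pd

# Hypothetical dataset: monthly temperatures for two cities
df = pd.DataFrame({
    "city": ["Boston", "Boston", "Austin", "Austin"],
    "month": [1, 7, 1, 7],
    "temp_f": [29.0, 74.0, 52.0, 96.0],
})

# Filtering rows by a specific condition (July readings only)
summer = df[df["month"] == 7]

# Grouping data and applying a function to each group
avg_by_city = df.groupby("city")["temp_f"].mean()

# Correlation between numeric columns of the dataframe
corr = df[["month", "temp_f"]].corr()

# Visualizing: a scatter plot of month against temperature
df.plot.scatter(x="month", y="temp_f")
```

In a Colab notebook, the same dataframe would typically come from a file via `pd.read_csv(...)` rather than being built inline.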
Module 2: Reverse engineering nature
- Dependent and independent variables and how they correspond to real life scenarios
- Intuition for what a linear model is
- Intuition for what a polynomial model is
- Python libraries that can perform linear regression on data
- Comparing the quality of different models (mean-squared-error and R^2 values)
- Fitting higher order polynomials
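A minimal sketch of these ideas, using NumPy's polynomial fitting on synthetic data (the data-generating formula is an assumption chosen for illustration):

```python
import numpy as np

# Synthetic data: y depends on x roughly as y = 2x + 1, plus noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.5, size=x.size)

# Fit a linear (degree-1) model and a higher-order (degree-3) polynomial
linear = np.polynomial.Polynomial.fit(x, y, deg=1)
cubic = np.polynomial.Polynomial.fit(x, y, deg=3)

def mse(model):
    # Mean-squared-error: average squared gap between data and model
    return np.mean((y - model(x)) ** 2)

def r_squared(model):
    # R^2: fraction of the data's variance the model explains
    ss_res = np.sum((y - model(x)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Use the fitted line to predict the dependent variable at a new point
y_new = linear(12.0)
```

On training data, the higher-order polynomial's MSE can only match or beat the line's, which is exactly why a separate measure of model quality matters.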
Module 3: Distributions and Latent Variables
- Uniform distributions
- Gaussian distributions
- Distribution mean and standard deviation
- Noise in distributions (biased and unbiased noise)
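These distribution concepts can be sketched by sampling with NumPy; the specific parameters (means, scales, sample sizes) are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Samples from a uniform and a Gaussian distribution
uniform_samples = rng.uniform(low=0.0, high=10.0, size=100_000)
gaussian_samples = rng.normal(loc=5.0, scale=2.0, size=100_000)

# With many samples, the sample mean and standard deviation
# approach the distribution's true parameters
u_mean, u_std = uniform_samples.mean(), uniform_samples.std()
g_mean, g_std = gaussian_samples.mean(), gaussian_samples.std()

# Unbiased noise averages out; biased noise shifts the mean
signal = np.full(100_000, 3.0)
unbiased = signal + rng.normal(0.0, 1.0, size=signal.size)  # mean stays near 3.0
biased = signal + rng.normal(0.5, 1.0, size=signal.size)    # mean shifts toward 3.5
```

The last two lines preview the module's punchline: averaging more measurements helps against unbiased noise but cannot correct a bias.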
Module 4: How machines think
- Categorizing data based on particular conditions being met
- Using linear regression to classify a new datapoint as above or below the best fit line
- Using a support vector classifier to separate two groups of data and classifying a new datapoint into a group
- Using logistic regression to classify data into two groups and finding the probabilities of a new datapoint falling into each group
- Understanding how to divide data into training and test sets
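Two of the classifiers above, plus the train/test split, can be sketched with scikit-learn on synthetic two-group data (the dataset and parameter choices are assumptions for illustration):

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two well-separated groups of synthetic 2-D points
X, y = make_blobs(n_samples=200, centers=2, cluster_std=1.0, random_state=0)

# Divide the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A support vector classifier learns a boundary separating the two groups
svc = SVC(kernel="linear").fit(X_train, y_train)
accuracy = svc.score(X_test, y_test)

# Logistic regression classifies a new datapoint AND reports the
# probability of it falling into each group
logreg = LogisticRegression().fit(X_train, y_train)
probs = logreg.predict_proba(X_test[:1])  # one row, two class probabilities
```

Scoring on the held-out test set, rather than the training set, is what tells you how the model handles data it has never seen.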
About the instructors
Frequently Asked Questions
Do I need to know any programming to take this course?
Is there a textbook for this course?