# Comprehensive Guides on Machine Learning

Mar 5, 2023



# What is this series about?

Welcome to our comprehensive machine learning series 👋! The goal of this series is to give you a deeper, more intuitive understanding of important concepts in machine learning. Our guides are filled with simple but insightful examples, a style of teaching we strongly believe in.

We aim to publish a new guide each week, and we also routinely improve our existing guides with more examples and sections.

# Machine learning models

Machine learning (ML) models are algorithms that learn from data to achieve tasks such as prediction, classification, and pattern recognition. Although ML models can easily be implemented using libraries such as scikit-learn, we recommend that you take the time to understand the underlying mathematics so that you grasp each model's strengths and weaknesses.
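As a minimal sketch of the workflow above, here is a classifier fitted with scikit-learn; the tiny dataset is purely illustrative, not from any guide in the series:

```python
from sklearn.linear_model import LogisticRegression

# Toy dataset: two features per sample, binary labels
X = [[0.0, 1.0], [1.0, 1.5], [2.0, 0.5], [3.0, 2.5]]
y = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X, y)           # "learning" = estimating the model's parameters from data
preds = model.predict(X)  # use the fitted model for classification
```

The same `fit`/`predict` interface applies to most scikit-learn models, which is what makes swapping models in and out so easy.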

# Feature engineering

Data in the real world is messy and must be cleaned before model training. No matter how advanced a model is, training on low-quality data will always yield low-quality output. Feature engineering is about transforming features so that they not only match the format the model expects, but also improve the model's performance and training speed.
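One common transformation, shown here as a hedged example (the guides may cover many others), is standardization, which rescales each feature to mean 0 and standard deviation 1 so that features on very different scales contribute comparably:

```python
from sklearn.preprocessing import StandardScaler

# Two features on wildly different scales
X = [[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]]

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)  # each column now has mean 0 and std 1
```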

# Optimization

Most ML models work by minimizing a particular cost function, which is simply a mathematical expression that represents how poorly a model is performing. In fact, the "learning" in "machine learning" is nothing more than minimizing the cost function through some optimizer such as gradient descent.
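To make gradient descent concrete, here is a minimal sketch minimizing a toy cost function J(w) = (w - 3)², whose minimum is at w = 3 and whose gradient is 2(w - 3):

```python
w = 0.0   # initial guess for the parameter
lr = 0.1  # learning rate (step size)

for _ in range(100):
    grad = 2 * (w - 3)  # gradient of J(w) = (w - 3)^2
    w = w - lr * grad   # step downhill, against the gradient

# w is now very close to 3, the minimizer of the cost function
```

Each step shrinks the distance to the minimum by a constant factor, so after 100 iterations `w` has essentially converged. Real cost functions have many parameters, but the update rule is the same idea applied per parameter.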

# Evaluating machine learning models

Once we build an ML model, we must evaluate its performance to check whether it aligns with our expectations. The evaluation metric depends on the nature of our task (e.g. classification or regression). We can also compare the performance of different ML models and keep the best-performing one.
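As a small illustration of task-dependent metrics (accuracy and mean squared error are just two common choices; the guides cover more), using scikit-learn:

```python
from sklearn.metrics import accuracy_score, mean_squared_error

# Classification: accuracy = fraction of correct predictions
y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]
acc = accuracy_score(y_true, y_pred)  # 3 of 4 correct -> 0.75

# Regression: mean squared error = average of squared residuals
y_true_reg = [2.0, 4.0]
y_pred_reg = [2.5, 3.5]
mse = mean_squared_error(y_true_reg, y_pred_reg)  # (0.5^2 + 0.5^2) / 2 = 0.25
```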

# PySpark

Spark is an open-source computing framework designed to handle big data. The basic idea is to split the data into smaller partitions and place them on multiple machines to facilitate parallel computing. PySpark is the Python API for Spark, which allows us to interact with the framework using Python.