20 lessons · 8th Grade
ML is a subset of AI where models learn from data using optimization. The goal: minimize a loss function that measures prediction errors.
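A minimal sketch of one such loss, mean squared error, on made-up numbers (the data here is purely illustrative):

```python
# Mean squared error: the average of squared prediction errors.
def mse_loss(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy targets vs. toy predictions.
loss = mse_loss([1.0, 2.0, 3.0], [1.0, 2.5, 2.0])
```

Training drives this number down by adjusting the model until predictions sit close to the targets.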
Hyperparameters (learning rate, batch size, layers) are not learned but set by engineers. Grid search and random search help find the best settings.
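Grid search can be sketched in a few lines; the `validation_score` function here is a stand-in for actually training and evaluating a model at each setting:

```python
import itertools

# Hypothetical stand-in: in practice this would train a model with these
# hyperparameters and return its validation accuracy.
def validation_score(lr, batch_size):
    return -abs(lr - 0.01) - 0.001 * abs(batch_size - 32)

grid = {"lr": [0.001, 0.01, 0.1], "batch_size": [16, 32, 64]}

# Try every combination and keep the best-scoring one.
best = max(
    itertools.product(grid["lr"], grid["batch_size"]),
    key=lambda cfg: validation_score(*cfg),
)
```

Random search samples combinations instead of trying them all, which often finds good settings faster when only a few hyperparameters matter.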
Ensemble methods combine multiple models for better results. Random forests use many decision trees. Boosting builds models that fix predecessors' errors.
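The core idea, majority voting, can be shown with three toy classifiers (the threshold rules below are invented for illustration):

```python
# Three weak toy classifiers that each threshold the input differently.
def clf_a(x): return 1 if x > 2 else 0
def clf_b(x): return 1 if x > 4 else 0
def clf_c(x): return 1 if x > 3 else 0

# The ensemble predicts whatever the majority of its members predict.
def ensemble_predict(x):
    votes = [clf_a(x), clf_b(x), clf_c(x)]
    return 1 if sum(votes) >= 2 else 0

pred = ensemble_predict(3.5)
```

A random forest applies the same voting idea to many decision trees, each trained on a different random slice of the data.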
Transfer learning takes a model trained on one task and adapts it for another. ImageNet pre-trained CNNs can be fine-tuned for medical image analysis.
Production ML needs data pipelines: collection, cleaning, feature extraction, training, evaluation, and monitoring. Each step requires engineering.
Fairness metrics include demographic parity, equalized odds, and calibration. No single metric captures all aspects of fairness; tradeoffs exist.
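Demographic parity is the simplest to compute: compare positive-prediction rates across groups. A sketch on invented toy data:

```python
# Toy predictions and group labels (illustrative only).
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Fraction of positive predictions within one group.
def positive_rate(group):
    sel = [p for p, g in zip(preds, groups) if g == group]
    return sum(sel) / len(sel)

# Demographic parity gap: 0 means both groups get positives at the same rate.
parity_gap = abs(positive_rate("a") - positive_rate("b"))
```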
Adversarial examples are inputs deliberately crafted to fool AI. A tiny, invisible change to a photo can make AI misclassify it completely.
XAI techniques like SHAP and LIME explain why a model made a specific prediction. Interpretability is crucial for trust and accountability.
Active learning has the model select which examples to label next, focusing on the most informative data points. This reduces labeling costs.
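One common strategy is uncertainty sampling: query the example the model is least sure about. A sketch with made-up model probabilities:

```python
# Hypothetical predicted probabilities for unlabeled examples.
unlabeled_probs = {"x1": 0.95, "x2": 0.52, "x3": 0.10, "x4": 0.70}

# Pick the example whose probability is closest to 0.5 (most uncertain).
query = min(unlabeled_probs, key=lambda x: abs(unlabeled_probs[x] - 0.5))
```

That example goes to a human labeler next; examples the model already classifies confidently add little.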
Federated learning trains models across multiple devices without sharing raw data. Your phone learns locally and only shares model updates.
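The standard aggregation step, federated averaging (FedAvg), is just a weight-wise mean of client updates. A sketch with toy weight vectors:

```python
# Each client trains locally and sends only its weights, never its data.
client_weights = [
    [0.1, 0.2],   # client A's local update
    [0.3, 0.4],   # client B's local update
    [0.2, 0.6],   # client C's local update
]

# The server averages the updates coordinate by coordinate.
global_weights = [
    sum(w[i] for w in client_weights) / len(client_weights)
    for i in range(len(client_weights[0]))
]
```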
MLOps applies DevOps practices to ML: version control for data and models, CI/CD for training, and monitoring for model drift in production.
Supervised learning maps inputs to outputs using labeled data. Algorithms include linear regression, decision trees, SVMs, and neural networks.
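The simplest of these, linear regression, can be fit by hand with the least-squares formulas. A sketch on toy data generated from y = 2x + 1:

```python
# Labeled toy data: inputs xs, known outputs ys.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

x_mean = sum(xs) / len(xs)
y_mean = sum(ys) / len(ys)

# Closed-form least-squares slope and intercept.
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean
```

The fit recovers the line the data was drawn from; real data adds noise, so the fit minimizes error instead of matching exactly.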
ML encompasses supervised, unsupervised, and reinforcement learning. Advanced topics include gradient descent, regularization, fairness, and deployment.
Unsupervised learning finds structure in unlabeled data. K-means clustering groups similar data points. PCA reduces dimensionality for visualization.
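One k-means iteration is easy to show on 1-D toy points: assign each point to its nearest centroid, then move each centroid to its cluster's mean:

```python
# Toy 1-D points with two visible groups, and two starting centroids.
points = [1.0, 1.5, 2.0, 8.0, 9.0, 10.0]
centroids = [0.0, 5.0]

# Assignment step: each point joins its nearest centroid.
clusters = {0: [], 1: []}
for p in points:
    nearest = min(range(2), key=lambda c: abs(p - centroids[c]))
    clusters[nearest].append(p)

# Update step: centroids move to the mean of their cluster.
centroids = [sum(clusters[c]) / len(clusters[c]) for c in range(2)]
```

Repeating these two steps until the centroids stop moving is the whole algorithm.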
Semi-supervised learning uses a small labeled set plus a large pool of unlabeled data. Self-supervised learning creates labels from the data itself, like predicting the next word.
Feature engineering transforms raw data into useful inputs for models. Choosing the right features dramatically impacts model performance.
Gradient descent is the core optimization algorithm used to train neural networks. It adjusts weights by following the slope of the loss function downhill.
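A minimal sketch on the one-variable loss f(w) = (w - 3)², whose gradient is 2(w - 3); the minimum sits at w = 3:

```python
w = 0.0       # start far from the minimum
lr = 0.1      # learning rate: how big each downhill step is

for _ in range(100):
    grad = 2 * (w - 3)   # slope of the loss at the current w
    w -= lr * grad       # step downhill, against the slope
```

After enough steps, `w` converges to 3. Too large a learning rate overshoots and diverges; too small a rate converges slowly.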
Backpropagation calculates how each weight contributed to the error and adjusts them accordingly. It is the key algorithm enabling deep learning.
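For a single sigmoid neuron with squared loss, backpropagation is just the chain rule written out. A sketch with toy values:

```python
import math

x, w, y = 1.0, 0.5, 1.0           # input, weight, target (toy values)

# Forward pass.
z = w * x
a = 1 / (1 + math.exp(-z))        # sigmoid activation
loss = (a - y) ** 2

# Backward pass: chain the local derivatives to get dloss/dw.
dloss_da = 2 * (a - y)            # derivative of squared loss
da_dz = a * (1 - a)               # derivative of sigmoid
dz_dw = x                         # derivative of w * x w.r.t. w
grad_w = dloss_da * da_dz * dz_dw
```

Here `grad_w` is negative, so gradient descent would increase the weight — which makes sense, since the activation undershot the target. Deep networks repeat this chaining layer by layer, from the loss back to the inputs.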
Regularization techniques (L1, L2, dropout) prevent overfitting by penalizing model complexity. They help models generalize to unseen data.
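The L2 case is the easiest to sketch: add a penalty proportional to the sum of squared weights, so big weights cost extra (the numbers below are illustrative):

```python
weights = [0.5, -1.0, 2.0]
data_loss = 0.3                   # stand-in for the model's prediction error
lam = 0.01                        # regularization strength

# L2 penalty: large weights are taxed quadratically.
l2_penalty = lam * sum(w ** 2 for w in weights)
total_loss = data_loss + l2_penalty
```

L1 uses `abs(w)` instead of `w ** 2`, which tends to push weights exactly to zero; dropout instead randomly disables neurons during training.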
Cross-validation splits data into multiple folds, training on some and testing on others. It gives a more reliable estimate of model performance.
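Building the folds is simple index bookkeeping. A 3-fold sketch on toy data (round-robin assignment stands in for shuffling):

```python
data = list(range(9))
k = 3

# Round-robin split into k folds.
folds = [data[i::k] for i in range(k)]

# Each split trains on k-1 folds and tests on the held-out one.
splits = []
for i in range(k):
    test_fold = folds[i]
    train_folds = [x for j, f in enumerate(folds) if j != i for x in f]
    splits.append((train_folds, test_fold))
```

Averaging the score across all k held-out folds gives a steadier performance estimate than a single train/test split.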