AI & ML Interview Roadmap: A Step-by-Step Study Guide (2025)
Usage Guidelines
This mindmap serves as an interactive, visual syllabus for ML/GenAI research and engineering roles. Each node links to a focused page for quick skimming or deep dives.
Zoom In / Out Tips:
Use your mouse scroll wheel or trackpad to zoom in and out.
Click and drag anywhere on the canvas to pan across the map.
# ML/GenAI Research Preparation
## Math Foundations
### Probability & Distributions
- [Conditional Probability](/core-skills/prob-and-stats/theories/1.2-conditional-probability-and-independence)
- [Bayes Theorem](/core-skills/prob-and-stats/theories/1.3-bayes-theorem-and-bayesian-reasoning) (formula refresher below)
- [Permutations & Combinations](/core-skills/prob-and-stats/theories/1.4-combinatorics-and-counting)
- [Bernoulli, Binomial, Poisson Distributions](/core-skills/prob-and-stats/theories/2.1-core-discrete-distributions)
- [Gaussian & Uniform Distributions](/core-skills/prob-and-stats/theories/2.2-core-continuous-distributions)
- [PDF vs CDF](/core-skills/prob-and-stats/theories/2.2-core-continuous-distributions)
- [Mean and Variance](/core-skills/prob-and-stats/theories/3.1-sampling-and-estimation)
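
As a quick refresher for the Bayes Theorem node above, the formula in its conditional and total-probability forms (notation only, independent of the linked page):

```latex
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)},
\qquad
P(B) = \sum_{i} P(B \mid A_i)\,P(A_i)
```
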
### Statistics & Inference
- [Law of Large Numbers](/core-skills/prob-and-stats/theories/3.1-sampling-and-estimation)
- [Central Limit Theorem](/core-skills/prob-and-stats/theories/3.1-sampling-and-estimation)
- [Maximum Likelihood Estimation (MLE)](/core-skills/prob-and-stats/theories/3.2-maximum-likelihood-estimation-mle)
- [Hypothesis Testing & p-value](/core-skills/prob-and-stats/theories/3.3-hypothesis-testing)
- [z-test & t-test](/core-skills/prob-and-stats/theories/3.3-hypothesis-testing) (see the SciPy sketch below)
- [Confidence Intervals (CI)](/core-skills/prob-and-stats/theories/3.4-confidence-intervals)
- [Type I vs Type II Error](/core-skills/prob-and-stats/theories/3.3-hypothesis-testing)
- [A/B Testing](/core-skills/prob-and-stats/theories/5.3-experimental-design)
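
For the z-test & t-test node above, a minimal sketch of a two-sample Welch t-test and a 95% confidence interval using SciPy; the data is synthetic and every number is illustrative:

```python
# Hedged sketch: Welch's t-test and a 95% CI for a mean on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=0.10, scale=0.05, size=500)  # illustrative metric
variant = rng.normal(loc=0.11, scale=0.05, size=500)

# Welch's t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# 95% CI for the variant mean: mean ± t* · s/√n
n = len(variant)
mean, sem = variant.mean(), stats.sem(variant)
lo, hi = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sem)
print(f"95% CI for variant mean: ({lo:.4f}, {hi:.4f})")
```
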
### Linear Algebra
- [Vectors & Operations](/core-skills/maths/theories/1.1-vectors-and-operations)
- [Matrix Multiplication & Transpose](/core-skills/maths/theories/1.2-matrix-operations)
- [Determinant, Inverse, Rank](/core-skills/maths/theories/1.3-determinants-inverses-rank)
- [Eigenvalues & Eigenvectors](/core-skills/maths/theories/1.4-eigenvalues-eigenvectors-svd)
- [PCA & SVD](/core-skills/maths/theories/5.4-pca-svd-dimensionality-reduction) (NumPy sketch below)
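
For the PCA & SVD node, a minimal NumPy sketch of PCA computed via SVD on centered data; the matrix sizes are arbitrary illustrations:

```python
# Hedged sketch: PCA via SVD on centered data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # 200 samples, 5 features
Xc = X - X.mean(axis=0)                # center each feature

# SVD of the centered data: Xc = U @ diag(S) @ Vt
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

components = Vt[:2]                    # top-2 principal directions
X_proj = Xc @ components.T             # project to 2D
explained_var = (S**2) / (len(X) - 1)  # eigenvalues of the covariance matrix
print("explained variance ratio:", explained_var[:2] / explained_var.sum())
```
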
### Calculus & Optimization
- [Limits & Continuity](/core-skills/maths/theories/2.1-limits-continuity-differentiability)
- [Derivatives & Gradients](/core-skills/maths/theories/2.2-derivatives-and-gradients)
- [Chain Rule & Backpropagation](/core-skills/maths/theories/2.3-chain-rule-and-backpropagation)
- [Convexity & Optimization Landscapes](/core-skills/maths/theories/2.4-optimization-and-convexity)
- [Gradient Descent](/core-skills/maths/theories/5.3-gradient-based-optimization) (NumPy sketch below)
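
For the Gradient Descent node, a bare-bones descent loop on a least-squares objective; the learning rate and step count are illustrative choices, not recommendations:

```python
# Hedged sketch: plain gradient descent on mean squared error for Xw ≈ y.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(3)
lr = 0.01
for step in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE
    w -= lr * grad
print("recovered w:", w.round(2))  # should be close to true_w
```
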
### Information Theory
- [Entropy, Cross-Entropy, KL Divergence](/core-skills/maths/theories/4.1-entropy-cross-entropy-kl-divergence) (NumPy sketch below)
- [Mutual Information](/core-skills/maths/theories/4.2-mutual-information)
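
For the entropy/cross-entropy/KL node, a small NumPy sketch that also checks the identity H(p, q) = H(p) + KL(p ‖ q); the distributions p and q are made up:

```python
# Hedged sketch: entropy, cross-entropy, and KL divergence for discrete
# distributions, plus the identity H(p, q) = H(p) + KL(p || q).
import numpy as np

def entropy(p):
    return -np.sum(p * np.log(p))

def cross_entropy(p, q):
    return -np.sum(p * np.log(q))

def kl_divergence(p, q):
    return np.sum(p * np.log(p / q))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print(entropy(p), cross_entropy(p, q), kl_divergence(p, q))
print(np.isclose(cross_entropy(p, q), entropy(p) + kl_divergence(p, q)))  # True
```
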
### Statistical Learning Foundations
- [Bias-Variance Tradeoff](/core-skills/maths/theories/5.1-bias-variance-tradeoff)
- [Regularization](/core-skills/maths/theories/5.2-regularization)
- [Gradient-Based Optimization](/core-skills/maths/theories/5.3-gradient-based-optimization)
- [PCA & Dimensionality Reduction](/core-skills/maths/theories/5.4-pca-svd-dimensionality-reduction)
## Machine Learning
### Core Concepts
- [Bias-Variance Tradeoff](/machine-learning/core/theories/1-bias-variance-tradeoff)
- [Overfitting vs Underfitting](/machine-learning/core/theories/2-overfitting-underfitting)
- [L1 vs L2 Regularization](/machine-learning/core/theories/3-regularization)
- [Cross-Validation Techniques](/machine-learning/core/theories/4-cross-validation)
- [Evaluation Metrics: Accuracy, Precision, Recall, F1, ROC-AUC](/machine-learning/core/theories/5-evaluation-metrics) (worked example below)
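
For the evaluation-metrics node, a worked example computing precision, recall, F1, and accuracy from raw confusion-matrix counts; the counts are invented for illustration:

```python
# Hedged sketch: core classification metrics from confusion-matrix counts.
def classification_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)          # of predicted positives, how many real
    recall = tp / (tp + fn)             # of real positives, how many found
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Illustrative counts only
print(classification_metrics(tp=80, fp=20, fn=10, tn=90))
```
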
### Linear Models
- [Linear Regression](/machine-learning/linear-models/linear-regression)
- [Logistic Regression](/machine-learning/linear-models/logistic-regression)
- [Mean Squared Error (MSE)](/machine-learning/linear-models/loss-functions/theories/1-mean-squared-error-mse)
- [Cross-Entropy Loss (Binary & Categorical)](/machine-learning/linear-models/loss-functions/theories/6-categorical-cross-entropy)
- [Gradient Descent Optimization](/machine-learning/linear-models/gradient-descent-optimization)
### Trees & Ensembles
- [Decision Trees: Gini vs Entropy](/machine-learning/trees/decision-trees) (impurity sketch below)
- [Random Forest (Bagging)](/machine-learning/trees/random-forest)
- [Gradient Boosting](/machine-learning/trees/gradient-boosting)
- [XGBoost](/machine-learning/trees/xgboost)
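
For the Gini vs entropy node, a short sketch comparing the two impurity measures on a binary class distribution (p is the positive-class fraction; the values printed are just examples):

```python
# Hedged sketch: Gini impurity vs entropy for a binary node.
import numpy as np

def gini(p):
    return 1 - p**2 - (1 - p)**2

def entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)  # avoid log(0)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

for p in (0.1, 0.3, 0.5):  # both peak at p = 0.5 (maximum impurity)
    print(f"p={p}: gini={gini(p):.3f}, entropy={entropy(p):.3f}")
```
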
### SVMs & Kernels
- [Core Intuition & Geometry](/machine-learning/svm/theories/1.1-grasp-the-core-intuition-and-geometry)
- [Hard vs Soft Margin](/machine-learning/svm/theories/1.2-hard-vs-soft-margin-tradeoff)
- [Kernel Trick & Nonlinear Spaces](/machine-learning/svm/theories/2.1-the-kernel-trick-nonlinear-spaces)
- [Polynomial vs RBF Kernel Tradeoffs](/machine-learning/svm/theories/2.2-polynomial-vs-rbf-kernel-tradeoffs)
- [Hyperparameter Tuning & Regularization](/machine-learning/svm/theories/3.1-hyperparameter-tuning-and-regularization)
### Feature Engineering
- [Handling Missing Values](/machine-learning/feature-engineering/theories/2.1-handling-missing-values)
- [Normalization (MinMax Scaling)](/machine-learning/feature-engineering/theories/3.1-normalization-min-max-scaling)
- [Standardization (Z-score)](/machine-learning/feature-engineering/theories/3.2-standardization-zscore-scaling)
- [Robust & Log Scaling](/machine-learning/feature-engineering/theories/3.3-robust-log-and-power-scaling)
- [One-Hot Encoding](/machine-learning/feature-engineering/theories/4.1-one-hot-encoding)
- [Label & Ordinal Encoding](/machine-learning/feature-engineering/theories/4.2-label-and-ordinal-encoding)
- [Target & Frequency Encoding](/machine-learning/feature-engineering/theories/4.3-target-frequency-binary-encoding)
- [Outlier Detection (Z-score, IQR, Isolation Forest)](/machine-learning/feature-engineering/theories/5.3-advanced-outlier-methods) (IQR sketch below)
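
For the outlier-detection node, a minimal sketch of the 1.5×IQR rule; the data array is fabricated so the two extreme points stand out:

```python
# Hedged sketch: flagging outliers with the classic 1.5×IQR rule.
import numpy as np

x = np.array([10, 12, 11, 13, 12, 95, 11, 10, 12, -40])  # illustrative data
q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
print("outliers:", x[(x < lower) | (x > upper)])  # -> [95 -40]
```
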
### Unsupervised Learning
- [K-Means Clustering](/machine-learning/unsupervised/kmeans)
- [PCA (Dimensionality Reduction)](/machine-learning/unsupervised/pca)
- [HDBSCAN (Density Clustering)](/machine-learning/unsupervised/hdbscan)
- [UMAP (Manifold Learning)](/machine-learning/unsupervised/umap)
### Recommendation Systems
- [Collaborative & Content-Based Filtering](/machine-learning/recommendation-systems)
### Time Series
- [Understanding Temporal Dependencies](/machine-learning/time-series/theories/1-understanding-temporal-dependencies)
- [Stationarity & Differencing](/machine-learning/time-series/theories/2-stationarity-and-differencing)
- [ACF & PACF](/machine-learning/time-series/theories/3-acf-and-pacf)
- [ARIMA & SARIMA Models](/machine-learning/time-series/theories/5-sarima)
- [Facebook Prophet Forecasting](/machine-learning/time-series/theories/6-facebook-prophet)
- [Feature Engineering for Time Series](/machine-learning/time-series/theories/7-feature-engineering)
- [Forecast Evaluation Metrics](/machine-learning/time-series/theories/8-forecast-metrics)
## Deep Learning
### Neural Network Fundamentals
- [Neural Network Fundamentals](/deep-learning/core/theories/1-neural-networks-fundamentals)
- [Perceptron & Multi-Layer Perceptron (MLP)](/deep-learning/core/theories/2-perceptron-and-mlp)
- [Forward Propagation](/deep-learning/core/theories/3-forward-propagation)
- [Backpropagation Algorithm](/deep-learning/core/theories/4-backpropagation) (NumPy sketch after this list)
- [Activation Functions (ReLU, Sigmoid, Tanh, Softmax)](/deep-learning/core/theories/5-activation-functions)
- [Gradient Descent Variants (SGD, Momentum, Adam)](/deep-learning/core/theories/6-optimization-algorithms)
- [Weight Initialization (Xavier, He)](/deep-learning/core/theories/7-weight-initialization)
- [Batch Normalization](/deep-learning/core/theories/8-batch-normalization)
- [Regularization (Dropout, L1/L2)](/deep-learning/core/theories/9-regularization-techniques)
- [Gradient Exploding & Vanishing](/deep-learning/core/theories/10-gradient-exploding-vanishing)
- [Training Tricks (Early Stopping, LR Scheduling)](/deep-learning/core/theories/11-training-tricks)
- [Common Loss Functions](/deep-learning/core/theories/12-common-loss-functions)
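
To tie the fundamentals above together, a minimal sketch of a one-hidden-layer network trained with hand-written backprop on XOR; the layer width, learning rate, and step count are illustrative, and convergence depends on the random seed:

```python
# Hedged sketch: one hidden layer, manual backprop, XOR-style toy problem.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass (binary cross-entropy + sigmoid simplifies to p - y)
    dlogits = (p - y) / len(X)
    dW2 = h.T @ dlogits; db2 = dlogits.sum(0)
    dh = dlogits @ W2.T * (1 - h**2)      # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= lr * grad

print(p.round(2).ravel())  # should approach [0, 1, 1, 0]
```
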
### Convolutional Neural Networks (CNNs)
- [Convolution, Padding, Stride](/deep-learning/cnn/theories/1-convolution-padding-stride)
- [Pooling & Feature Reduction](/deep-learning/cnn/theories/2-pooling-and-feature-reduction)
- [CNN Architecture Design](/deep-learning/cnn/theories/3-cnn-architecture-design)
- [CNN Backpropagation](/deep-learning/cnn/theories/4-cnn-backpropagation)
- [Transfer Learning & Fine-Tuning](/deep-learning/cnn/theories/5-transfer-learning-and-fine-tuning)
- [CNN Regularization & Data Augmentation](/deep-learning/cnn/theories/6-regularization-and-augmentation)
- [Modern CNN Architectures (VGG, ResNet, Inception, EfficientNet)](/deep-learning/cnn/theories/7-modern-cnn-architectures)
### Recurrent Neural Networks (RNNs)
- [RNN Fundamentals](/deep-learning/rnn/theories/1-rnn-fundamentals)
- [Vanishing Gradient Problem](/deep-learning/rnn/theories/2-vanishing-gradient-problem)
- [LSTM (Long Short-Term Memory)](/deep-learning/rnn/theories/3-lstm)
- [GRU (Gated Recurrent Unit)](/deep-learning/rnn/theories/4-gru)
- [Bidirectional RNN](/deep-learning/rnn/theories/5-bidirectional-rnn)
- [Sequence-to-Sequence Models](/deep-learning/rnn/theories/6-seq2seq-models)
- [Advanced RNN Variants (IndRNN, SRU)](/deep-learning/rnn/theories/7-advanced-rnn-variants)
### Transformers & Attention
- [Introduction to Attention Mechanisms](/deep-learning/transformers/theories/1-introduction-to-attention)
- [Self-Attention Mechanism](/deep-learning/transformers/theories/2-self-attention-mechanism) (NumPy sketch after this list)
- [Multi-Head Attention](/deep-learning/transformers/theories/3-multi-head-attention)
- [Positional Encoding](/deep-learning/transformers/theories/4-positional-encoding)
- [Transformer Encoder-Decoder Architecture](/deep-learning/transformers/theories/5-transformer-architecture)
- [BERT, GPT & Modern Transformers](/deep-learning/transformers/theories/6-bert-gpt-and-modern-transformers)
- [Vision Transformers (ViT)](/deep-learning/transformers/theories/7-vision-transformers-vit)
- [Attention Interpretability & Visualization](/deep-learning/transformers/theories/8-attention-interpretability)
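
For the self-attention node, a NumPy sketch of scaled dot-product attention, softmax(QKᵀ/√d_k)V; the tensor shapes are arbitrary and the inputs random:

```python
# Hedged sketch: scaled dot-product attention in NumPy.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)  # (seq_q, seq_k)
    weights = softmax(scores, axis=-1)               # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 16))
K = rng.normal(size=(6, 16))
V = rng.normal(size=(6, 16))
out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape, attn.sum(axis=-1))  # (4, 16), rows of attn sum to 1
```
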
### Optimization & Regularization
- [Loss Functions Overview](/deep-learning/optimization/theories/1-loss-functions-overview)
- [Optimizers (SGD, RMSProp, Adam)](/deep-learning/optimization/theories/2-optimizers-sgd-rmsprop-adam)
- [Learning Rate Scheduling](/deep-learning/optimization/theories/3-learning-rate-schedules)
- [Gradient Clipping](/deep-learning/optimization/theories/4-gradient-clipping)
- [Weight Decay & Early Stopping](/deep-learning/optimization/theories/5-weight-decay-and-early-stopping)
### Autoencoders & Representation Learning
- [Autoencoder Architecture](/deep-learning/autoencoders/theories/1-autoencoder-architecture)
- [Variational Autoencoder (VAE)](/deep-learning/autoencoders/theories/2-variational-autoencoder)
- [Denoising Autoencoders](/deep-learning/autoencoders/theories/3-denoising-autoencoder)
- [Sparse Autoencoders](/deep-learning/autoencoders/theories/4-sparse-autoencoder)
- [Contrastive Learning (SimCLR, BYOL)](/deep-learning/representation-learning/theories/1-contrastive-learning)
- [Self-Supervised Learning](/deep-learning/representation-learning/theories/2-self-supervised-learning)
### Generative & Diffusion Models
- [GAN Fundamentals](/deep-learning/gans/theories/1-gan-fundamentals)
- [Generator & Discriminator](/deep-learning/gans/theories/2-generator-and-discriminator)
- [GAN Training Instability](/deep-learning/gans/theories/3-gan-training-stability)
- [Conditional GANs](/deep-learning/gans/theories/4-conditional-gan)
- [DCGAN](/deep-learning/gans/theories/5-dcgan)
- [CycleGAN](/deep-learning/gans/theories/6-cyclegan)
- [Diffusion Models](/deep-learning/diffusion-models/theories/1-diffusion-models-intro)
- [Stable Diffusion](/deep-learning/diffusion-models/theories/2-stable-diffusion)
- [Latent Diffusion Models](/deep-learning/diffusion-models/theories/3-latent-diffusion-models)
## ML System Design
### ML Lifecycle
- [Understand the Lifecycle Stages](/system-design/lifecycle/theories/1.1-understand-the-lifecycle-stages)
- [Data Strategy & Infrastructure](/system-design/lifecycle/theories/1.3-data-strategy-and-infrastructure)
- [Monitoring & Feedback Loops](/system-design/lifecycle/theories/1.8-monitoring-and-feedback-loops)
### Infrastructure
- [Model Registry & Versioning](/system-design/infrastructure/theories/2.1-understand-model-versioning)
- [CI/CD for ML Pipelines](/system-design/infrastructure/theories/3.2-build-ml-deployment-pipeline)
- [Feature Store Design](/system-design/infrastructure/theories/4.1-core-concepts-of-feature-store)
### Monitoring
- [Data Drift](/system-design/monitoring/theories/1.2-data-drift)
- [Concept Drift](/system-design/monitoring/theories/1.3-concept-drift)
- [Model Performance Monitoring](/system-design/monitoring/theories/1.4-model-performance-monitoring)
- [Alerting & Retraining Triggers](/system-design/monitoring/theories/1.5-alerting-retraining-triggers)
### Design Patterns
- [Batch vs Real-Time Processing](/system-design/patterns/theories/1.1-batch-vs-realtime-processing)
- [Latency vs Throughput Trade-offs](/system-design/patterns/theories/1.2-latency-vs-throughput)
- [Shadow Deployment vs A/B Testing](/system-design/patterns/theories/1.3-shadow-vs-ab-testing)
### System Architecture
- [End-to-End ML System Anatomy](/system-design/architecture/theories/1.1-end-to-end-ml-system-anatomy)
- [Real-Time vs Batch System Tradeoffs](/system-design/architecture/theories/1.5-real-time-vs-batch-system-tradeoffs)
- [End-to-End ML System Design](/system-design/architecture/theories/1.10-end-to-end-ml-system-design)
- [Fraud Detection System Blueprint](/system-design/architecture/theories/2.2-fraud-detection-system)
- [Recommendation System Pipeline](/system-design/architecture/theories/2.3-recommendation-system)
- [Real-Time Ads Ranking System](/system-design/architecture/theories/2.4-real-time-ads-ranking-system)
## SQL + Analytics
### Core SQL Concepts & Optimization
- [Filter and Aggregate Like an Analyst](/core-skills/sql/core/theories/1.2-Filter-and-Aggregate-Like-an-Analyst)
- [Combine Tables with Joins](/core-skills/sql/core/theories/1.4-Combine-Tables-with-Joins)
- [Conditional Logic & Nulls](/core-skills/sql/core/theories/1.3-Nulls-Case-and-Conditional-Logic)
- [Read & Interpret Execution Plans](/core-skills/sql/theories/2.1-Read-and-Interpret-Execution-Plans)
- [Indexing for Speed](/core-skills/sql/theories/2.2-Indexing-for-Speed)
- [Query Refactoring & CTEs](/core-skills/sql/theories/2.3-Query-Refactoring-and-CTEs)
- [Avoiding Common SQL Pitfalls](/core-skills/sql/theories/2.4-Avoiding-Common-Pitfalls)
### Analytical SQL & Window Functions
- [Window Functions & Analytics](/core-skills/sql/theories/3.1-Window-Functions-and-Analytics)
- [Cohort & Retention Analysis](/core-skills/sql/theories/3.2-Cohort-and-Retention-Analysis)
- [Time-Series & Lag Analysis (LEAD, LAG)](/core-skills/sql/theories/3.3-Time-Series-and-Lag-Analysis) (runnable example below)
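
For the LEAD/LAG node, a runnable example of a LAG() window query executed through Python's built-in sqlite3 module (SQLite 3.25+ ships window functions); the table and values are invented:

```python
# Hedged sketch: day-over-day deltas with LAG() via sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE daily_sales (day TEXT, revenue REAL);
INSERT INTO daily_sales VALUES
  ('2025-01-01', 100), ('2025-01-02', 120), ('2025-01-03', 90);
""")
rows = conn.execute("""
SELECT day,
       revenue,
       revenue - LAG(revenue) OVER (ORDER BY day) AS delta
FROM daily_sales
ORDER BY day;
""").fetchall()
for r in rows:
    print(r)  # ('2025-01-01', 100.0, None), ('2025-01-02', 120.0, 20.0), ...
```
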
### Database Design & Modeling
- [Understand Normal Forms](/core-skills/sql/theories/4.1-Understand-Normal-Forms)
- [Star & Snowflake Schemas](/core-skills/sql/theories/4.2-Star-and-Snowflake-Schemas)
- [Data Integrity & Keys](/core-skills/sql/theories/4.3-Data-Integrity-and-Keys)
### Real-World Analytics Practice
- [StrataScratch SQL Practice](/core-skills/sql/practice/stratascratch)
- [LeetCode SQL Interview Questions](/core-skills/sql/practice/leetcode)
## Generative AI & LLMs
### Transformers from Scratch
- [Scaled Dot-Product Attention](/generative-ai/transformers/scratch/scaled-attention)
- [Positional Encodings](/generative-ai/transformers/scratch/positional-encoding)
- [Multi-Head Self-Attention](/generative-ai/transformers/scratch/multi-head-attention)
### HuggingFace Applications
- [Sentiment Classification with Transformers](/generative-ai/huggingface/sentiment-classification)
- [Named Entity Recognition (NER)](/generative-ai/huggingface/ner)
- [Question Answering with Transformers](/generative-ai/huggingface/question-answering)
### Generative Adversarial Networks (GANs)
- [Generator vs Discriminator](/generative-ai/gans/generator-discriminator)
- [Mode Collapse](/generative-ai/gans/mode-collapse)
- [Minimax Loss Function](/generative-ai/gans/minimax-loss)
### Reinforcement Learning
- [Q-Learning Algorithm](/generative-ai/rl/q-learning) (update-rule sketch after this list)
- [SARSA Algorithm](/generative-ai/rl/sarsa)
- [Bellman Equation](/generative-ai/rl/bellman)
- [Deep Q-Network (DQN)](/generative-ai/rl/dqn)
- [Policy vs Value-Based Methods](/generative-ai/rl/policy-vs-value)
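
For the Q-learning node, a sketch of the tabular update rule Q(s,a) ← Q(s,a) + α[r + γ maxₐ′ Q(s′,a′) − Q(s,a)]; the state/action sizes and the single transition are made up:

```python
# Hedged sketch: the tabular Q-learning update on one illustrative transition.
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99  # learning rate and discount (illustrative)

def q_update(s, a, r, s_next):
    td_target = r + gamma * Q[s_next].max()   # bootstrap off best next action
    Q[s, a] += alpha * (td_target - Q[s, a])  # move toward the TD target

q_update(s=0, a=1, r=1.0, s_next=2)
print(Q[0])  # [0.  0.1]
```
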
### Large Language Models (LLMs)
#### Foundations & Training
- [LLM Architecture: Encoder, Decoder, Attention](/generative-ai/large-language-models/foundations-and-training/theories/1.1-llm-architecture)
- [Tokenization: BPE, WordPiece, SentencePiece](/generative-ai/large-language-models/foundations-and-training/theories/1.2-tokenization)
- [Embeddings: Static vs Contextual](/generative-ai/large-language-models/foundations-and-training/theories/1.3-embeddings)
- [Modeling Objectives: Causal & Masked](/generative-ai/large-language-models/foundations-and-training/theories/1.4-modeling-objectives)
- [Pretraining vs Fine-Tuning](/generative-ai/large-language-models/foundations-and-training/theories/2.1-pretraining-vs-finetuning)
- [Supervised Fine-Tuning (SFT)](/generative-ai/large-language-models/foundations-and-training/theories/2.2-supervised-finetuning)
- [Instruction Tuning](/generative-ai/large-language-models/foundations-and-training/theories/2.3-instruction-tuning)
- [PEFT: LoRA & Adapters](/generative-ai/large-language-models/foundations-and-training/theories/2.4-peft-lora-adapters)
- [Quantization & Distillation](/generative-ai/large-language-models/foundations-and-training/theories/2.5-quantization-and-distillation)
#### Prompting & Reasoning
- [Prompt Engineering Fundamentals](/generative-ai/large-language-models/application-and-reasoning/theories/2.1-foundations-of-prompt-engineering)
- [Chain of Thought (CoT)](/generative-ai/large-language-models/application-and-reasoning/theories/2.2-chain-of-thought-cot)
- [Self-Consistency Decoding](/generative-ai/large-language-models/application-and-reasoning/theories/2.3-self-consistency-decoding)
- [Tree of Thoughts (ToT)](/generative-ai/large-language-models/application-and-reasoning/theories/2.4-tree-of-thoughts-tot)
- [Multimodal Prompting](/generative-ai/large-language-models/application-and-reasoning/theories/2.6-multimodal-prompting)
#### Retrieval-Augmented Generation (RAG)
- [RAG Architecture Overview](/generative-ai/large-language-models/application-and-reasoning/theories/3.1-understand-the-core-rag-architecture) (retrieval sketch after this list)
- [Embedding Models for RAG](/generative-ai/large-language-models/application-and-reasoning/theories/3.2-embedding-models-for-rag)
- [Vector Databases: FAISS, Pinecone, Weaviate](/generative-ai/large-language-models/application-and-reasoning/theories/3.3-vector-databases-and-indexing)
- [LangChain Pipelines for RAG](/generative-ai/large-language-models/application-and-reasoning/theories/3.8-frameworks-langchain-llamaindex-custom-pipelines)
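
For the RAG architecture node, a sketch of just the retrieval step as cosine similarity over precomputed embeddings; the vectors are random stand-ins for real embedding-model outputs:

```python
# Hedged sketch: RAG retrieval as cosine similarity over stored embeddings.
import numpy as np

rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(1000, 384))  # 1000 chunks, 384-dim vectors
query_embedding = rng.normal(size=384)

def top_k(query, docs, k=3):
    docs_n = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    q_n = query / np.linalg.norm(query)
    scores = docs_n @ q_n                      # cosine similarity per chunk
    idx = np.argsort(scores)[::-1][:k]         # highest-scoring first
    return idx, scores[idx]

idx, scores = top_k(query_embedding, doc_embeddings)
print(idx, scores.round(3))  # indices of the 3 most similar chunks
```
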
#### Agents & Autonomy
- [ReAct: Reasoning + Acting](/generative-ai/large-language-models/agents-and-autonomy/theories/1.1-agentic-paradigm-shift)
- [AutoGPT & BabyAGI](/generative-ai/large-language-models/agents-and-autonomy/theories/1.2-from-react-to-autogpt)
- [Agentic Architectures & Modular Reasoning](/generative-ai/large-language-models/agents-and-autonomy/theories/1.3-agentic-architectures-modular-reasoning)
- [Memory, Planning & Control (MCP)](/generative-ai/large-language-models/agents-and-autonomy/theories/2.3-control-systems-self-correction-feedback)
#### Tooling & Frameworks
- [LangChain](/generative-ai/large-language-models/tools/langchain)
- [LlamaIndex](/generative-ai/large-language-models/tools/llamaindex)
- [HuggingFace Transformers](/generative-ai/large-language-models/tools/huggingface)
- [OpenAI APIs](/generative-ai/large-language-models/tools/openai-api)
- [Model Serving (Triton, vLLM, TGI)](/generative-ai/large-language-models/tools/serving)
#### Advanced Architectures & Model Families
- [GPT Family (GPT-2, 3.5, 4, 4o)](/generative-ai/llm-models/theories/1.1-gpt-lineage)
- [BERT Family (BERT, RoBERTa, DistilBERT)](/generative-ai/llm-models/theories/2.1-bert-and-variants)
- [Long-Context Models (Claude, Gemini, Mistral)](/generative-ai/llm-models/theories/4.1-context-extension-strategies)
- [Open-Source LLMs (LLaMA, Mistral, Mixtral)](/generative-ai/llm-models/theories/6.1-llama-mistral-falcon-families)
## Review Tools
### Spaced Repetition
- [Flashcards: Anki Setup](/core-skills/review/tools/anki)
- [Physical Index Card System](/core-skills/review/tools/index-cards)
### Weekly Review
- [How to Create Weekly Review Sheets](/core-skills/review/routines/weekly-review)
### Visual Sketches
- [Decision Tree Splits](/core-skills/review/visuals/decision-trees)
- [PCA 2D Projection](/core-skills/review/visuals/pca-projection)
- [CNN Pipeline Diagram](/core-skills/review/visuals/cnn-pipeline)
### Summary Sheets
- [Formula Sheet: Bayes Theorem](/core-skills/review/summary/bayes-formula)
- [Formula Sheet: Confidence Intervals](/core-skills/review/summary/confidence-interval)
- [Formula Sheet: Gradients & Optimization](/core-skills/review/summary/gradients)
- [Formula Sheet: PCA Eigenvectors](/core-skills/review/summary/pca-eigenvectors)
- [Formula Sheet: Loss Functions](/core-skills/review/summary/loss-functions)
Tip: Bookmark this mindmap and use it before interviews as a rapid revision tool.
Mindmap Updates
- 2025-10-31: All links upgraded to the canonical /theories/ structure; added missing subtopics.
- 2025-07-10: Added SQL + Analytics and Review Tools branches.
- 2025-07-05: Integrated GenAI Agents & RAG subtopics.
- 2025-07-01: Initial roadmap release covering Math → DL topics.
FAQ
Q: Can I download or export this roadmap?
A: Not currently. For offline use, take a screenshot.
Q: Can I contribute?
A: Yes! Please send me an email.