Data Analysis

  1. Learning = Representation + Evaluation + Optimization.
  2. It is generalization that counts.
  3. Data alone is not enough.
  4. Overfitting has many faces.
  5. Intuition fails in high dimensions.
  6. Theoretical guarantees are not what they seem.
  7. Feature engineering is the key.
  8. More data beats a cleverer algorithm.
  9. Learn many models, not just one.
  10. Simplicity does not imply accuracy.
  11. Use Occam’s razor with care.
  12. Representable does not imply learnable.
  13. Correlation does not imply causation.
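Lessons 2, 4, and 10 can be seen in a small sketch (plain numpy, with made-up data): a high-degree polynomial fits noisy linear training data better than a straight line, yet generalizes worse to held-out points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: the true relationship is linear plus noise.
x_tr = rng.uniform(-1, 1, 10)
y_tr = x_tr + rng.normal(0, 0.2, 10)
x_te = rng.uniform(-1, 1, 100)
y_te = x_te + rng.normal(0, 0.2, 100)

results = {}
for deg in (1, 9):
    # Least-squares polynomial fit of the chosen degree.
    coeffs = np.polyfit(x_tr, y_tr, deg)
    tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    te = np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)
    results[deg] = (tr, te)
    print(f"degree {deg}: train MSE {tr:.3f}, test MSE {te:.3f}")
```

The degree-9 polynomial drives training error toward zero by chasing the noise, which is exactly why training error alone cannot measure generalization.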

Universal Approximation

A perceptron is the simplest form of neural network: a single layer of weights followed by a threshold. Multi-layer perceptrons (MLPs) are called universal approximators: by the universal approximation theorem, an MLP with one hidden layer of non-linear units can approximate any continuous function on a compact set to arbitrary accuracy, given enough hidden units.
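As a minimal sketch of this (plain numpy, not an optimized implementation), a one-hidden-layer tanh MLP trained with gradient descent can fit sin(x) on a compact interval; the layer sizes and learning rate here are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: approximate sin(x) on the compact set [-pi, pi].
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

# One hidden layer of 32 tanh units.
W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    pred = H @ W2 + b2                # network output
    err = pred - y
    loss = np.mean(err ** 2)
    # Backpropagation with plain full-batch gradient descent.
    g_pred = 2 * err / len(X)
    gW2 = H.T @ g_pred; gb2 = g_pred.sum(0)
    g_H = (g_pred @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ g_H; gb1 = g_H.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"final MSE: {loss:.4f}")
```

Note that the theorem only says such a network exists; it gives no guarantee that gradient descent will find it, or how many units are needed.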

The Curse of Dimensionality

The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience.

In machine learning, the number of training samples needed to cover the input space grows exponentially with the number of dimensions, so a sample size that is adequate in low dimensions quickly becomes sparse in high dimensions.
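A quick Monte Carlo sketch (plain numpy, dimensions chosen arbitrarily) shows one face of the curse: the fraction of a unit hypercube lying inside its inscribed ball collapses as the dimension grows, so uniformly sampled points become vanishingly sparse around the center.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points uniformly from the unit hypercube centered at the
# origin and count how many fall inside the inscribed ball of
# radius 0.5. In d=2 this fraction is pi/4; it collapses toward
# zero as d grows.
fractions = []
for d in (2, 5, 10, 20):
    pts = rng.uniform(-0.5, 0.5, size=(100_000, d))
    frac = float(np.mean(np.linalg.norm(pts, axis=1) <= 0.5))
    fractions.append(frac)
    print(f"d={d:2d}: fraction inside inscribed ball = {frac:.4f}")
```

By d=20 essentially none of the 100,000 samples land inside the ball, which is why sample requirements blow up exponentially with dimension.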


Graph Neural Networks

Graph-level functions should be permutation invariant and node-level functions permutation equivariant: relabeling the nodes must leave a graph-level output unchanged, and must permute node-level outputs in the same way. Graph theorists argue that message-passing GNNs are at most as expressive as graph isomorphism tests such as the Weisfeiler-Lehman (WL) test.
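A quick numerical check (plain numpy, one round of a simplified message-passing layer with made-up weights) illustrates the two symmetry properties: permuting the node labels permutes the node embeddings, but leaves a sum-pooled graph embedding unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency matrix
X = rng.normal(size=(4, 3))                  # node features
W = rng.normal(size=(3, 3))                  # shared weight matrix

def message_pass(A, X):
    # One round: aggregate self + neighbor features, transform, squash.
    return np.tanh((A + np.eye(len(A))) @ X @ W)

P = np.eye(4)[[2, 0, 3, 1]]                  # a node relabeling (permutation)
H = message_pass(A, X)
H_perm = message_pass(P @ A @ P.T, P @ X)

print(np.allclose(P @ H, H_perm))            # node function: equivariant
print(np.allclose(H.sum(0), H_perm.sum(0)))  # graph function: invariant
```

Because the layer only aggregates multisets of neighbor features, two graphs that the WL test cannot distinguish will also receive identical embeddings from such a network.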