Intelligence

Intelligence should be viewed as a process rather than a skill.

  • Components of this process:

    • World model and a way to update it
      • Learning to represent the world in a non-task-specific way (joint embedding architectures; a minimal sketch appears after this list).
    • Persistent memory
    • Internal representations
      • Supervised and reinforcement learning require too many samples/trials (hence regularized methods and model-predictive control; see the planning sketch after this list).
    • Reasoning and planning
      • Beyond feed-forward and System 1 subconscious computation.
      • Making reasoning compatible with learning (energy-based models? see the energy-minimization sketch after this list).
  • The keynote of this process is generalization:

    • Fluidity (synthesize new programs on the fly)
    • Domain independence
    • Information efficiency (abstraction and compression)

  • To measure machine intelligence in terms of generalization, we need to control for experience and priors.

    • Is that really a good idea?
    • For cognition to evolve, do we not need the ability to "autonomously" interact with the real world?
    • Need for benchmarks antifragile to memorization.
  • Examinations are not a good proxy for measuring the intelligence of current machines: they were designed for humans, under the latent assumption that the test-taker must generalize in order to do well. That assumption does not hold for machines.

  • Does scaling really solve abstraction? Can pattern recognition lead to pattern extraction, and can that in turn lead to reasoning?
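
A minimal sketch of the joint-embedding idea from the first sub-list (assuming PyTorch; `Encoder`, `joint_embedding_loss`, and the tiny MLP are illustrative choices, not a specific published architecture): two related views of the world are encoded, and the model learns to predict one embedding from the other rather than to reconstruct raw inputs, so the representation does not have to be task-specific.

```python
# Minimal joint-embedding sketch (assumes PyTorch; all names are illustrative).
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps an observation to an embedding; the architecture is a placeholder."""

    def __init__(self, dim_in: int, dim_emb: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_emb)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def joint_embedding_loss(
    encoder: Encoder, predictor: nn.Module, x: torch.Tensor, y: torch.Tensor
) -> torch.Tensor:
    """Predict the embedding of view y from the embedding of view x.

    The error is measured in representation space, not pixel space, so the
    encoder can discard task-irrelevant detail. Detaching the target branch
    is one simple (not the only) guard against the collapsed solution where
    every embedding is the same constant.
    """
    z_x = encoder(x)
    with torch.no_grad():
        z_y = encoder(y)
    return ((predictor(z_x) - z_y) ** 2).mean()


# Usage sketch: x and y stand for two related views (e.g. past and future frames).
encoder = Encoder(dim_in=32, dim_emb=16)
predictor = nn.Linear(16, 16)
x, y = torch.randn(8, 32), torch.randn(8, 32)
loss = joint_embedding_loss(encoder, predictor, x, y)
loss.backward()
```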
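
A sketch of planning with a world model via model-predictive control, as referenced under internal representations (plain NumPy; `model`, `cost`, and the random-shooting planner are assumptions of this sketch, not something prescribed by the notes):

```python
# Random-shooting MPC sketch (NumPy only; `model` and `cost` are assumed callables).
import numpy as np


def mpc_step(model, cost, state, horizon=10, candidates=256, action_dim=1, rng=None):
    """Plan with a world model: sample candidate action sequences, roll the
    model forward, score each rollout with the task cost, and return only
    the first action of the cheapest sequence (then replan at the next step)."""
    rng = rng or np.random.default_rng()
    actions = rng.normal(size=(candidates, horizon, action_dim))
    total = np.zeros(candidates)
    for i in range(candidates):
        s = state
        for t in range(horizon):
            s = model(s, actions[i, t])          # predicted next state
            total[i] += cost(s, actions[i, t])   # accumulated task cost
    return actions[np.argmin(total), 0]


# Usage sketch with a toy linear model and a "drive the state to zero" cost.
model = lambda s, a: s + 0.1 * a
cost = lambda s, a: float(np.sum(s**2) + 0.01 * np.sum(a**2))
first_action = mpc_step(model, cost, state=np.array([1.0]))
```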
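
A sketch of the energy-based view of reasoning hinted at above (assuming PyTorch; `energy` is any scorer differentiable in the answer, not a particular published model): the answer is found by iteratively minimizing a learned energy rather than by a single feed-forward pass, so extra optimization steps buy extra deliberation.

```python
# Energy-based "reasoning as optimization" sketch (assumes PyTorch).
import torch


def infer(energy, x, y_init, steps=50, step_size=0.1):
    """Find an answer y by gradient descent on a learned energy E(x, y)
    instead of producing y in a single feed-forward pass; more steps means
    more deliberation from the same trained model."""
    y = y_init.clone().requires_grad_(True)
    for _ in range(steps):
        e = energy(x, y).sum()
        (grad,) = torch.autograd.grad(e, y)
        y = (y - step_size * grad).detach().requires_grad_(True)
    return y.detach()


# Usage sketch with a hand-written quadratic energy whose minimum is y = 2x.
energy = lambda x, y: ((y - 2 * x) ** 2).sum()
x = torch.tensor([1.0, -1.0])
answer = infer(energy, x, y_init=torch.zeros(2))  # approaches [2.0, -2.0]
```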

Driven by compression progress

  • Notion of compression, beauty, and interestingness.
  • Interestingness vs. information (a toy sketch follows below).
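
A toy proxy for the distinction in the last bullet (using Python's zlib as a stand-in compressor; this only conveys the flavor of the idea, not Schmidhuber's actual compression-progress measure, which tracks how much the observer's own compressor improves over time):

```python
# Toy proxy: interestingness as shared regularity with what is already known,
# rather than raw information content. zlib stands in for the learned compressor.
import os
import zlib


def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data))


def joint_saving(history: bytes, new: bytes) -> int:
    """Bytes saved by compressing the new data together with the history,
    compared to compressing the two separately: a crude stand-in for how
    much regularity the new data shares with what is already known."""
    separate = compressed_size(history) + compressed_size(new)
    joint = compressed_size(history + new)
    return separate - joint


# Both snippets below are random bytes, hence incompressible in isolation and
# carrying roughly the same raw information; only the one overlapping the
# history yields a large joint saving.
history = os.urandom(1000)
print(joint_saving(history, history[:300]))    # large: regularity w.r.t. history
print(joint_saving(history, os.urandom(300)))  # near zero: informative, no shared structure
```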