We’ve realised that artificial intelligences (AIs) – particularly neural networks, a form of machine learning that learns from data without having to be fed explicit instructions – are themselves fallible.
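To make "learning from data without explicit instructions" concrete, here is a minimal, purely illustrative sketch (not anything Facebook or DeepMind runs): a one-parameter model that recovers the hidden rule y = 3x from examples alone, by nudging its weight to reduce prediction error. The function names and the rule are assumptions for the example.

```python
# Hypothetical toy example: a one-weight "network" trained by gradient descent.
# No rule is coded in; the weight is shaped entirely by the (x, y) examples.
def train(samples, lr=0.01, epochs=200):
    w = 0.0  # start knowing nothing about the data
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x
            # nudge the weight downhill on the squared error for this example
            w -= lr * 2 * (pred - y) * x
    return w

# examples generated by a hidden rule the learner never sees directly
data = [(x, 3 * x) for x in range(-5, 6)]
w = train(data)
print(round(w, 3))  # the learned weight converges to 3.0
```

Real neural networks do the same thing at vastly larger scale – millions of weights instead of one – which is precisely why their learned "rules" stop being readable.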
The second is that humans turn out to be deeply uncomfortable with theory-free science.
There may still be plenty of theory of the traditional kind – that is, graspable by humans – that usefully explains much but has yet to be uncovered.
The theories that make sense when you have huge amounts of data look quite different from those that make sense when you have small amounts.
The bigger the dataset, the more inconsistencies the AI learns. The end result is not a theory in the traditional sense of a precise claim about how people make decisions, but a set of claims that is subject to certain constraints.
Consider the theory-free predictive engines embodied by Facebook or AlphaFold.
“Explainable AI”, which addresses how to bridge the interpretability gap, has become a hot topic. But that gap is only set to widen and we might instead be faced with a trade-off: how much predictability are we willing to give up for interpretability?
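The trade-off can be sketched in miniature. The example below is an illustrative assumption, not a claim about any real explainable-AI system: on a synthetic task it compares a rule a human can read ("predict 1 when x exceeds a threshold") against a nearest-neighbour model that simply memorises the data. The flexible model predicts better; the interpretable one can be stated in a sentence.

```python
# Synthetic task (an assumption for illustration): label is 1 when |x| > 0.7.
data = [(x / 100.0, 1 if abs(x / 100.0) > 0.7 else 0) for x in range(-100, 101)]

def best_stump(points):
    # interpretable model: "predict 1 when x > t", for the best single t
    best_t, best_acc = None, -1.0
    for t, _ in points:
        acc = sum((x > t) == (y == 1) for x, y in points) / len(points)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

def one_nn_acc(points):
    # flexible model: predict the label of the nearest stored example
    hits = 0
    for x, y in points:
        _, label = min(points, key=lambda p: abs(p[0] - x))
        hits += (label == y)
    return hits / len(points)

t, stump_acc = best_stump(data)
print(f"stump 'x > {t:.2f}': accuracy {stump_acc:.2f}")   # ~0.85, but readable
print(f"1-NN memorizer: accuracy {one_nn_acc(data):.2f}")  # 1.00, but opaque
```

A single threshold cannot capture the two-sided rule, so it tops out well below the memoriser; the memoriser is exact here but offers no statement a human could carry away. That gap, scaled up, is the question the essay poses.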