Adrian Zidaritz
1 min read · Mar 4, 2024


The same problem exists in the US as in the UK. I would even say that the problem here is more acute: Silicon Valley and Washington do not cooperate most of the time. This will have to change.

Bias is very difficult to eliminate from AI systems because they are based on statistical learning from very large datasets, and curating those datasets for bias would be very laborious. More promising is the idea of synthetically generating data that is free of bias and then refining the AI systems on that data. But synthetic data has issues of its own.

Regarding learning, it is true that we do not yet understand how neural networks learn. Moreover, when humans learn, we accumulate knowledge about the real world. If you ask me what the distance between San Francisco and London is, measured along a geodesic, I "know" how to look that information up. Large language models do not know about reality itself; they only grasp the patterns with which humans express their thoughts about reality in language.

But this is already changing. AI systems can be combined with deductive systems, systems that reason based on mathematical logic. They are now able to solve math problems, and they would also be able to answer the question I posed above by appealing to other systems plugged in underneath them that would do the calculation, Wolfram Mathematica for example.
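The kind of calculation such a plugged-in system would perform is easy to sketch. Assuming approximate coordinates for the two cities, the great-circle (geodesic) distance on a spherical Earth follows from the haversine formula; this is an illustration of the calculation itself, not a claim about how any particular LLM tool integration works:

```python
from math import radians, sin, cos, atan2, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometers between two points on a sphere.

    Assumes a spherical Earth with mean radius 6371 km, so the result
    is an estimate, not a survey-grade figure.
    """
    R = 6371.0  # mean Earth radius in km
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return R * 2 * atan2(sqrt(a), sqrt(1 - a))

# San Francisco (37.77 N, 122.42 W) to London (51.51 N, 0.13 W)
distance = haversine_km(37.7749, -122.4194, 51.5074, -0.1278)
print(f"{distance:.0f} km")  # roughly 8600 km
```

An LLM alone can only pattern-match on text it has seen about this distance; a deductive or computational system plugged in underneath actually runs the calculation.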

So the way I think about AI, and LLMs in particular, is as interfaces to knowledge, perhaps the best interfaces to knowledge. But not knowledge itself.


Adrian Zidaritz

Data scientist, PhD in Mathematics. Perspectives on artificial intelligence and its implications for democracy, human values, and your own well-being.