James Padolsey's Blog

2025-02-05

Unrepresented Zeitgeists in AI

There is a hypothesis that says, roughly: AI is, by its nature and training, embedded with a collective truth or zeitgeist of human thought captured at a specific time. It encodes a singular snapshot of culture, heavily amalgamated and homogenized into one coherent picture, that ends up flat and lukewarm, yet also heavily biased toward specific cultural norms while believing itself to be unbiased. That last part is a manifestly shallow lobotomy of deeper cognitions: you cannot unbias bias by policing words alone.

If AI/LLMs are indeed unconsciously imbued with a Western, monocultured, Silicon Valley thoughtspace, do they not also hold, within their myriad neurons, a picture of the collective, and of the many collectives across the globe? One can query them and find knowledge of nuances in various cultures. That knowledge may be low-signal and stereotyped, but it is something: whether we talk about commune living, disability advocacy, Inuit norms, Bhutanese Buddhism, or post-incarceration mental health support, the model holds a picture of each of these areas, yet it won't act in coherence within any one of them. Coherence means a deep vertical understanding and empathy. Still, these models do have knowledge of culture and the collective, even if abstracted through Western lenses. Perhaps, then, we can use this "collective intelligence" (a central intelligence that by its nature IS collective, because it was born of collective human creation) to create tailored models for individual communities and cultures.

But how do we extract latent collective thought? I have run experiments showing that, by using the stories and linguistic tilts of particular cultures and communities, we can activate latent pathways and bring forth those representations without such a strong Western anthropological gaze.

But I am most curious about something else: how, even with only hunches, do you motivate a new incentive for AI labs and LLM creators to look beyond evaluations of reasoning and mathematics, and towards the more qualitative, fleshy operating system of actual human beings? How does the model react when I feel poorly? When I'm navigating a problem with no solution? And what about when someone outside the Western-Anglophone zeitgeist asks the AI for help? Or when they are subject to the litany of obscured downstream applications of AI that pollute pillars of society, from medicine to hiring? They are left as victims of a zeitgeist chosen as canonical by a small group of engineers gathered around a table in California.

[end thought stream]