the Age of ……

it was the Industrial Age, followed by the Information Age, or Digital Age, or Knowledge Age (Bereiter, 2002). and 2.5 decades into the 21st century, the next Age has perhaps arrived — the Age of Pretending.

just saw Richard Stallman give this lecture at Georgia Tech some two weeks ago, and he coined the term “PI”, no, not private investigator, but Pretend Intelligence:

“…nowadays, people often use the term artificial intelligence for things that aren’t intelligent at all because they don’t understand anything and they don’t know anything… promoted the most for large language models, generators as I call them, because they don’t know anything. They generate text and they don’t understand really what that text means…

Every time you call them AI, you are endorsing the claim that they are intelligent and they’re not. So let’s let’s refuse to do that. So I’ve come up with the term Pretend Intelligence. We could call it PI. ” (12:25-12:49)

calling it PI is great, cos it’s very catchy, and cos some singaporeans pride themselves on liking abbreviations a lot (and many often assume an initialism is the same as an acronym).

was discussing with my new younger friend vera 清雅 (from 清秀 ‘refined’ and 雅丽 ‘elegant’ — a beautiful name), who happened to share with me an article on the phenomenon of influencers. and influencers essentially bank on “perceived expertise, trustworthiness, and attractiveness” (Duckwitz & Strasser, 2025, p.2). the keyword here is — perceived. and what’s the best act that can influence people? yes, no prize for guessing it: wayang (where one defines wayang as involving some act of pretending, intelligence or otherwise).

it’ll be interesting to observe where all this is going some years down the road, and how AI or PI is going to advance humanity, in the domains of human intelligence, pretending, or wayang.

small year-end observation of GenAI/LLM/transformer

got my own evidence of how far GenAI, based on the Transformer model, is going (or is going nowhere) yesterday while finalising my last piece of homework for 2025. GenAI, based on the Transformer model, works fundamentally by predicting what word(s) come next. and what makes this ‘prediction’ possible? the dataset the models were trained on. in short, the Transformer, while ‘creative’, is creating based on existing patterns derived from its dataset. and who created this dataset? human thinking, thoughts, ideas, formed into words in the pre-GenAI era. and that dataset has long run out by now. you may read this article by de Gregorio to see all the ideas i have mentioned fall together.

long story short, whatever an LLM provides you, it’s something that already existed out there in its mega training dataset.
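to make the ‘predicting what comes next’ idea concrete, here’s a toy sketch — not the actual transformer (which uses learned neural representations, not raw counts), just a bigram frequency predictor i made up for illustration. the point it shows: the ‘best’ next word is simply whatever followed most often in the training text.

```python
from collections import Counter, defaultdict

# a tiny made-up training corpus; real LLMs train on vastly larger text
corpus = "the cat sat on the mat the cat ate the fish".split()

# bigram table: for each word, count which word follows it and how often
next_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    next_counts[current_word][next_word] += 1

def predict_next(word):
    """return the most frequent follower of `word` in the training corpus."""
    followers = next_counts[word]
    return followers.most_common(1)[0][0] if followers else None

# 'cat' follows 'the' twice, 'mat' and 'fish' once each,
# so the frequency-based prediction is 'cat'
print(predict_next("the"))  # -> cat
```

nothing in this predictor ‘knows’ what a cat is — it can only ever replay patterns already present in its corpus, which is the limitation i ran into below.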

so, now back to my observation. this is the statement i wrote/created:
“With the advent of generative artificial intelligence (GenAI), cyber actors have harnessed it for autonomising complex hacking activities”

after feeding the statement into PAIR (powered by claude), the platform suggested:

“autonomising” -> “automate” (clearer expression)

what’s clear and what’s not clear is subjective. but ‘clearer’ here is a conclusion drawn by the algorithms from the dataset. and why is “autonomising” less ‘clear’? by design, ‘clearness’ has to be interpreted based on the training dataset. this begs another question: between autonomising and automating/automate, which term is likely to appear more often in the dataset, and thus lends itself to the prediction of ‘clearness’? from my point of view as the author, PAIR’s suggestion is definitely not ‘clearer’ in representing what i intended for my readers. and ‘autonomising’ is likely a relatively rare concept out there at the moment. to me, in this case, the LLM’s greatest limitation of being bounded by its dataset is somewhat revealed. asking a far-stretched question: is the current conception of the LLM/transformer going to lead to AGI? i think the answer is clear.
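my frequency hunch can be checked mechanically. a toy sketch below — the corpus here is made up by me purely for illustration; a real check would count over the model’s actual web-scale training data — of why a frequency-driven system would rank ‘automate’ as the ‘clearer’ word:

```python
from collections import Counter

# stand-in for a training corpus; the real one is web-scale text
sample_text = (
    "engineers automate tests and automate builds and automate deployments "
    "while a few researchers try to autonomise entire workflows"
).split()

freq = Counter(sample_text)

# a purely frequency-driven 'clarity' signal favours the commoner term,
# regardless of what the author actually intended to say
print(freq["automate"], freq["autonomise"])  # -> 3 1
```

if ‘clearer’ just means ‘more frequent in the dataset’, the rarer (and, to me, more precise) term loses every time.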