SINGAPORE: By now, it is clear that artificial intelligence (AI) is not going anywhere. In fact, judging from Prime Minister Lawrence Wong’s recent National Day Rally speech, it will only become more entrenched in how Singapore works, learns and lives.
While he acknowledged concerns that have been raised over the use of AI by students, such as over-reliance, loss of critical thinking skills and the temptation to take shortcuts, Mr Wong also urged educators and parents to remain open to its potential.
The challenge, he noted, is to strike the right balance: empowering young people to fully exploit the benefits of technology while protecting them from potential harms.
But what does that look like in practice, especially for parents?
A recent CNA Talking Point survey of 500 students found that 84 per cent of those in secondary school already use AI for their homework at least weekly.
You will not find an official survey of students under 13, because of ChatGPT’s age restrictions. However, I know many parents who let their primary school children use ChatGPT because it has become so ubiquitous – ChatGPT has amassed over 800 million weekly users in less than three years.
Yet the same parents worry about whether generative AI will stimulate or stifle their children’s learning.
A few months ago, I organised a Gen AI webinar for 100 parents, and their questions were revealing: “How do we nurture our children to use generative AI responsibly?”; “How do I help children discern true images from fake images?”; “How do I keep my kids safe from AI while also taking advantage of AI for their studies?”
I have also been conducting Gen AI workshops for secondary school teachers. They are equally worried and struggle to keep pace with Gen AI’s advancements. For example, the latest version of Google Gemini allows users to create instant web apps and webpages with just a few prompts, fuelling a new trend called “vibe coding” among non-STEM folks.
As an educator, parent and Gen AI coach, I believe that we adults play a critical role in guiding our children even as we go through a massive technological shift. We must arm ourselves and our children with three things: AI literacy, a clear understanding of the learning process and deep human values.
PARENTS MUST BECOME AI-LITERATE
I am sure parents will groan when I say that we must all become AI-literate. It is already a chore for us to figure out today’s school subjects, and now we have another subject to “learn”.
The good news is that there is no thick AI textbook to buy; you just need to start using Gen AI in your daily work to become AI-literate.
Try using Gen AI apps like ChatGPT or Google Gemini for everyday tasks such as checking grammar, generating images for presentations, doing online research or analysing data, to get a feel for the technology’s strengths and flaws. Such regular practice will also show you how Gen AI can “hallucinate” (produce wrong information) and fail at tasks it was not trained on.
Try this fun exercise with your children: ask ChatGPT or any Gen AI image generator to create a picture of an analogue clock. You will discover that the clock hands are often stuck at 10:10, no matter what time you ask for. This is because most of the clock-face images the AI was trained on were watch advertisements set at 10:10, so that the brand logo is clearly visible.
Once you become familiar with the mind of the machine, you’ll be better prepared to guide your children.
UNDERSTANDING THE LEARNING PROCESS
In the first two years after ChatGPT’s launch, many tools were released to detect the use of Gen AI in student assignments. There were also news stories about disputes between teachers and students over whether Gen AI had been used.
Today, many educators have learned that no AI-detection tool can guarantee 100 per cent accuracy.
In the same CNA report, National University of Singapore (NUS) lecturer Jonathan Sim demonstrated how the same piece of work was flagged as 74 per cent AI-generated by one tool and as fully human-created by another.
If you ask me, since many people are already using Gen AI, it is more constructive to learn how people learn, and determine when to keep technology out of the learning process.
For example, in primary school, the learning of multiplication tables still requires rote memorisation, or we will not be able to do mental sums as adults. Surely, we should not rely on ChatGPT to tell us the answer to “50 x 50”?
In the area of essay writing, one can get started with AI-generated points, but the essay must be written manually so the brain is put to work in synthesising ideas and concepts. Gen AI can be applied later to evaluate the essay and check for typos.
This probably means a return to pen-and-paper work … and many aching hands.
At Nanyang Technological University (NTU), where I teach, I have begun shifting towards more oral assessments instead of traditional written ones. During their oral presentations, undergraduates have to articulate their thought process and be ready to answer my questions. It is time-consuming, but it is a time-tested method of teaching and learning, handed down from the Greek philosopher Socrates more than 2,000 years ago.
Do talk to your child’s teacher about their AI usage policy and learn how AI can help or hinder in specific subjects.
CULTIVATING HUMAN VALUES
The toughest but most vital thing for parents to do is to help their children discern the right and wrong uses of Gen AI, especially since we are being drowned by a deluge of AI-generated content.
How does one do this, now that deepfakes and misinformation are so difficult to distinguish from reality?
My view is that we need to double down on instilling values such as mindfulness and integrity.
As WIRED co-founder and author Kevin Kelly wrote: “It’s hard to cheat an honest person.”
The mindful person with high integrity will pay close attention to what he or she sees, and not take everything at face value.
We need to teach children how to verify information sources, to remember that the machine has no morals, and to understand the consequences of not actually learning anything in school.
Underlying all these actions is the desire to retain our human agency. The free availability of and easy access to Gen AI apps make it tempting to outsource all our thinking and decision-making. But if we do, we risk losing ourselves to AI algorithms.
How can we let that happen to our children, and to ourselves?
Ian Yong Hoe Tan is a strategic communication lecturer at the Wee Kim Wee School of Communication and Information, Nanyang Technological University. He has more than two decades of experience working in the media and technology industries. He also coaches organisations in Gen AI upskilling.