Developing AI Responsibly.

While the AI opportunity is immense for Indonesia, the challenges that come with it must be addressed to boost the country’s adoption of AI tools. To maintain trust in AI and offset the new challenges it creates, we will need to do more to help people adapt to new career paths, ensure adoption is widely shared, and protect against new potential security risks.


While AI has significant potential, it will also create real challenges.

Like any powerful technology, AI has the capacity to be used for bad ends as well as good ones — and navigating the transitions it creates will need careful management. In our survey, 88% of people in Indonesia said they believed that AI needs to be rolled out responsibly.

In this section, we will explore the specific challenges AI could create, and how these challenges can be tackled. Among the challenges AI could create are:

Exacerbating inequalities.

While AI has the potential to be a hugely democratising technology accessible to everyone, this might not happen by default, and significant work will be required to ensure that its benefits can support every part of society.

Magnifying skills gaps.

While the potential for AI to lead to unemployment in the short term is often exaggerated, it is likely to change how some current occupations and sectors work, requiring retraining and upskilling.

Introducing new security risks.

AI could also make it easier to create new cybersecurity threats or misinformation, while badly designed systems with low transparency could have unintended consequences.

Indonesians are looking for greater confidence in AI tools.

0 %

agreed that there should be controls on the use of AI to ensure it is not used in a misleading way.

0 %

agreed that private individuals should have control of their own appearance, face or voice.

0 %

agreed that we should ensure there are protections for content creators so they’re not harmed by AI.

How Google is developing AI responsibly.

Google’s approach to AI governance is guided by its AI Principles of bold innovation, responsible development and deployment, and collaborative progress, so that people, businesses, and governments around the world can benefit from AI while its risks are mitigated.

Based on its recent Responsible AI Progress Report, the company employs a full-stack governance approach across the AI lifecycle, from design to testing to deployment to iteration, comprising:

  1. Govern: Google’s governance is guided by its AI Principles, as well as various frameworks and policies like the Secure AI Framework and Frontier Safety Framework. It employs pre- and post-launch processes with leadership reviews to ensure alignment, and regularly publishes model cards and technical reports for transparency.
  2. Map: Google takes a scientific approach to mapping AI risks through research and expert consultation, publishing over 300 research papers on responsible AI and safety topics and codifying this into a risk taxonomy.
  3. Measure: Google employs a rigorous approach to measuring AI performance with a focus on safety, privacy, and security benchmarks. Multi-layered red teaming, involving both internal and external teams, proactively tests AI systems, and model and application evaluations assess alignment with policies before and after launch.
  4. Manage: Google deploys and evolves mitigations for content safety (filters, instructions, safety tuning), security (the Secure AI Framework), and privacy. It also works to advance user understanding through provenance technology (like SynthID, which has been open-sourced for any developer to apply) and AI literacy education, and supports the broader ecosystem with research funding, tools, and industry collaboration.

There are emerging adoption gaps, including a rural-urban divide.

So far, the diffusion of AI has largely been powered by ‘bottom up’ adoption dynamics, with employees choosing to use the technology themselves. 65% of current AI users said they had largely chosen to use AI tools at work themselves, compared to just 21% who said they had been encouraged to use AI tools by their company leadership.

In our polling, we found three significant divides:

  1. Age: Under-25s were twice as likely as those over 45 to be daily users of AI tools.
  2. Education: Non-graduates were a third less likely than graduates to be daily users.
  3. Rural vs urban: Those living in a small town, village or rural area were a third less likely to be daily users.

Closing these gaps will require a concerted effort from the government, education system, tech companies, and Indonesian businesses themselves to support skills training and development among parts of the population that are currently lagging behind. This means replicating the success of specialised courses such as Grow with Google across larger portions of the population.

Building on its success training over 2 million Indonesians through Grow with Google, Google has intensified its AI training efforts in Indonesia over the past two years. These comprehensive programs (up to 900 hours) cater to diverse groups, from students to start-ups, offering flexible learning formats for all skill levels, focused on real-world impact.5

What is stopping Indonesians from using AI?

We asked Indonesians to say in their own words what the main barriers were stopping them from using AI more. The concerns we received back were varied, from not wanting to become too dependent on the technology to worries about reliability. Addressing them will require a combination of raising awareness about what AI can do and continued research into real worries over safety and reliability.

Responses are edited for grammar and spelling, but otherwise unchanged.

Workers in clerical or admin roles are most likely to need help with career transitions.

While today’s AI models are increasingly powerful, there are still many tasks that they can’t do as well as a human. That means that for the majority of workers they act as complements, rather than substitutes. While they can help with individual tasks, they are unlikely to take over a whole occupation and lead to entire jobs being substituted. In our modelling, we estimate that less than 5% of workers in Indonesia have jobs in occupations that are at risk of substitution from AI.

Those workers, largely concentrated in clerical or administrative roles, are those most likely to benefit from support with upskilling and retraining. However, even here, any displacement of old roles is likely to be more than offset by growing demand for services and workers with similar aptitudes and skillsets across the economy, suggesting there is unlikely to be any medium-term increase in unemployment.

AI should be deployed to pre-empt cybersecurity threats.

Cybercrime and fraud are among the biggest concerns of people in Indonesia, with misinformation the highest-ranking concern about the increasing use of AI in our polling. Misuse of AI by hostile actors could create new attack vectors, and developers of foundation models need to do all they can to counter these use cases.

However, AI tools could also help shift the balance between offence and defence in favour of the latter. At the moment, most cybersecurity solutions rely on scanning for known threats, whereas AI-driven tools can proactively monitor for hostile software and software vulnerabilities at a technical level, as well as for new types of social engineering such as phishing.

By 2035, we estimate that over half of the costs from cybersecurity threats and fraud could be prevented by the combination of more effective prevention and faster response times from AI-driven solutions.

Building careers with AI

Adhi Setiawan’s Journey with the Bangkit Academy.

Early adopters of AI in Indonesia are taking advantage of the career opportunities of the machine learning revolution. Bangkit Academy, led by Google, offers bespoke training initiatives which produce graduates with in-demand skills, allowing young Indonesians to build meaningful careers in AI.

One such young person is Adhi Setiawan, an AI engineer at PT Kalbe Farma. When Adhi finished his undergraduate degree in IT, he lacked the confidence to embark on a career in the field. As a wheelchair user with significant mobility issues, Adhi explains that he felt like a burden to others and frequently found himself isolated.

In February 2021, Adhi discovered the machine learning course at Bangkit Academy, which offers specialist training to ready high-calibre technical talent for careers at Indonesia’s world-class technology companies. Praised by the Ministry of Education and then-President Joko Widodo, Bangkit was the perfect place for Adhi to be among the thousands of students taking their skills to the next level.

“The Bangkit program provides quality learning materials that can be implemented practically to hone skills. However, patience, persistence, and never giving up are needed to finally master a field.”

Adhi Setiawan

Graduate of the Google-led Bangkit Academy

Adhi studied for over 900 hours at the Bangkit Academy, building a detailed understanding of machine learning on top of his knowledge of IT and coding, while also improving his English. This has since enabled him to:

  • Write a full thesis as part of a Capstone Research Project
  • Build a network of like-minded and knowledgeable students and workers across the AI industry
  • Begin his career as an AI engineer.

Now, working at the frontier of AI innovation, Adhi is seeking to revolutionise healthcare and transportation systems across Indonesia. He has gone, in his own words, from lacking confidence to finding his courage. That journey began with a Google-led AI initiative, and with Adhi’s own commitment and curiosity in adopting and exploring AI.