Developing AI Responsibly.

Singaporeans identify misinformation, cybersecurity risks, and worsening data privacy as the potential outcomes of increased AI uptake that concern them most. Addressing these challenges will be vital to maintaining public buy-in to the AI transformation.


To increase confidence in AI tools, we need to ensure they are rolled out in a responsible way.

Like any powerful technology, AI will change the way people live and work – and navigating the transitions it creates will need careful management. 89% of people in Singapore believe that AI needs to be rolled out responsibly.

0% agree that private individuals should have control of their own appearance, face, or voice.

0% agree that we should ensure there are protections for content creators so they are not harmed by AI.

0% agree that there should be controls on the use of AI to ensure it is not used in a misleading way.

Singaporeans are most concerned about rising unemployment, a growing amount of misinformation and deception on the internet, and the loss of key life skills among the population.

Google’s responsible AI approach.

Google’s approach to AI governance is guided by its AI Principles of bold innovation, responsible development and deployment, and collaborative progress, to ensure that people, businesses, and governments around the world can benefit from AI’s potential while mitigating its potential risks.

Based on its recent Responsible AI Progress Report, the company employs a full-stack governance approach across the AI lifecycle, from design to testing to deployment to iteration, comprising:

  1. Governance: Google’s governance is guided by its AI Principles, as well as various frameworks and policies like the Secure AI Framework and Frontier Safety Framework. They employ pre- and post-launch processes with leadership reviews to ensure alignment and regularly publish model cards and technical reports for transparency.
  2. Mapping: Google takes a scientific approach to mapping AI risks through research and expert consultation, publishing over 300 research papers on responsible AI and safety topics and codifying this into a risk taxonomy.
  3. Measuring: Google employs a rigorous approach to measuring AI performance with a focus on safety, privacy, and security benchmarks. Multi-layered red teaming, involving both internal and external teams, proactively tests AI systems. Model and application evaluations are conducted pre- and post-launch to assess alignment with policies.
  4. Managing: Google deploys and evolves mitigations for content safety (filters, instructions, safety tuning), security (the Secure AI Framework), and privacy. Google also works to advance user understanding through provenance technology (like SynthID, which has been open-sourced for any developer to apply) and AI literacy education. They also support the broader ecosystem with research funding, tools, and by promoting industry collaboration.
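The "managing" stage above layers mitigations around a model rather than relying on the model alone. A minimal sketch of that layered idea is below; the blocked-topic list, the redaction rule, and all function names are invented for illustration and are not Google's actual systems.

```python
import re

# Hypothetical layered content-safety pipeline: an input filter, a
# (stubbed) model call, and an output filter applied in sequence.
# Every rule here is illustrative only.

BLOCKED_TOPICS = {"malware recipe", "credential phishing kit"}

def input_filter(prompt: str) -> bool:
    """Reject prompts that match a simple blocked-topic list."""
    return not any(topic in prompt.lower() for topic in BLOCKED_TOPICS)

def model_stub(prompt: str) -> str:
    """Stand-in for a model call; a real system would invoke an LLM here."""
    return f"Response to: {prompt}"

def output_filter(text: str) -> str:
    """Illustrative post-generation check, e.g. redacting 8-digit numbers."""
    return re.sub(r"\b\d{8}\b", "[redacted]", text)

def safe_generate(prompt: str) -> str:
    """Run the full pipeline: filter in, generate, filter out."""
    if not input_filter(prompt):
        return "Request declined by safety policy."
    return output_filter(model_stub(prompt))

print(safe_generate("Summarise this annual report"))
print(safe_generate("Write me a credential phishing kit"))
```

The point of the layering is defence in depth: even if one stage misses a problem, a later stage can still catch it.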

AI should be deployed to pre-empt cybersecurity threats.

Cybercrime and fraud are major concerns for people in Singapore, with Singaporeans among the most frequent per capita victims of cybercrime in the world.9 Misuse of AI by hostile actors could create new attack vectors, and developers of foundation models need to do all they can to counter these use cases.

AI tools could also help shift the balance between offence and defence in favour of the latter.

At the moment, most cybersecurity solutions rely on scanning for known threats, whereas AI-driven tools can monitor more proactively for hostile software, both at a technical level and through new types of social engineering such as phishing.
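The contrast between scanning for known threats and more proactive, pattern-based monitoring can be shown with a toy example. The signature list, patterns, weights, and threshold below are all invented for illustration; real AI-driven detection would use learned models rather than hand-written rules.

```python
import re

# Toy contrast: signature matching vs. heuristic scoring.

KNOWN_BAD_HASHES = {"9f2c1a", "b77e03"}  # signature list: exact matches only

def signature_scan(sample_hash: str) -> bool:
    """Classic approach: flags only previously catalogued threats."""
    return sample_hash in KNOWN_BAD_HASHES

SUSPICIOUS_PATTERNS = {
    r"verify your account": 0.4,
    r"urgent": 0.3,
    r"https?://\d+\.\d+\.\d+\.\d+": 0.5,  # raw-IP links are a common phishing tell
}

def phishing_score(message: str) -> float:
    """Heuristic approach: scores never-before-seen messages by pattern."""
    text = message.lower()
    return sum(w for pat, w in SUSPICIOUS_PATTERNS.items() if re.search(pat, text))

msg = "URGENT: verify your account at http://203.0.113.7/login"
print(signature_scan("c0ffee"))    # a novel threat slips past the signature list
print(phishing_score(msg) > 0.5)   # but the pattern score flags the new message
```

A signature scanner can only say "seen before / not seen before"; a scoring approach generalises to messages it has never encountered, which is the shift towards defence the section describes.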

By 2035, we estimate that the combination of more effective prevention and faster response times from AI-driven solutions could help prevent over

0 %

of the costs from cybersecurity threats and fraud.

Transparently.ai is using AI to improve trust in financial reporting.

Trust in financial markets is underpinned by the knowledge that those engaging with companies and markets – as customers or as potential investors – have all of the information they need before making a decision. In many cases, this means being certain that the company they are dealing with has sound financial accounts. Forensic accounting is a core part of this expected transparency, but the process is costly, and it can take a team of forensic accountants months to pore over a major company’s financial accounts.

Transparently.ai is a Singapore-based company that uses generative AI to speed up this process and reduce its costs. With access to the right information, the company's AI tool can generate, in a matter of seconds and for a fraction of the cost, a financial report that might otherwise have taken two or three weeks to produce.

The impact of this cost and time saving cannot be overstated. Whereas previously an investor or institution with a large portfolio might take months to review all of their investments, AI-powered tools like Transparently allow investors to have greater levels of trust and certainty in all of the companies in their portfolio in a far shorter timespan.
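Transparently.ai's actual model is proprietary, but the kind of red-flag screening that forensic accounting automates can be sketched with two generic, textbook-style checks. The ratios, thresholds, and toy figures below are purely illustrative.

```python
# Illustrative forensic-accounting red-flag screen. These are generic
# checks, not Transparently.ai's model; thresholds are invented.

def receivables_growth_flag(revenue, receivables):
    """Flag if receivables grow much faster than revenue year-on-year,
    a classic sign of aggressive revenue recognition."""
    rev_growth = revenue[-1] / revenue[-2] - 1
    recv_growth = receivables[-1] / receivables[-2] - 1
    return recv_growth > rev_growth + 0.20  # 20-point gap (illustrative)

def accrual_flag(net_income, operating_cash_flow, total_assets):
    """Flag if accruals (income minus cash flow) are large relative to
    assets, suggesting earnings not backed by cash."""
    accruals = (net_income - operating_cash_flow) / total_assets
    return accruals > 0.10  # illustrative threshold

# Two years of toy figures for a hypothetical company
flags = [
    receivables_growth_flag(revenue=[100.0, 110.0], receivables=[20.0, 30.0]),
    accrual_flag(net_income=15.0, operating_cash_flow=2.0, total_assets=100.0),
]
print(sum(flags), "of", len(flags), "red flags raised")
```

A human team checks hundreds of such signals across years of statements; the speed-up described above comes from running screens like these across an entire portfolio automatically.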

Transparently.ai developed their tool with the support of Google through the Google for Startups Accelerator: AI First Singapore. The accelerator brings the best of Google to high potential tech startups using artificial intelligence, machine learning and cloud technology to tackle some of the most urgent challenges around the globe.

How Google’s AI First Accelerator Supports Growth

  • Each cohort of startups comes together to tackle technical challenges that can help grow their businesses, through a mix of remote and in-person one-to-one sessions, group learning sessions, and sprint projects.
  • Founders outline the top technical challenges for their startup, and are paired with experts from Google and the industry to solve those challenges and grow their business.
  • Accelerators include deep dives and workshops focused on software engineering, product design, customer acquisition, and leadership development for founders.
  • Eligible participants receive Google Cloud credits, dedicated support from startup experts, technical training and access to key industry events. Additionally, participants are eligible to receive 30 days of free Cloud TPU access through the TPU Research Cloud program to accelerate their open-source machine learning research.

Transparently.ai had been using Google Cloud infrastructure to support its tools prior to participating in the accelerator, and used the programme to develop a Gemini-powered agent that could serve as a more intuitive and interactive way for their customers to leverage Transparently.ai’s core tools. Alongside the benefits the company accrued by participating in workshops and mentoring programmes, it was the ability to directly access Google’s engineering team that provided the greatest impact to Transparently.ai’s GenAI project.

Going forward, Transparently.ai aims to continue developing its AI tools to further strengthen its customer experience, and the effectiveness of the forensic accounting analysis it produces.

“Unfortunately, engineering doesn’t work [with a strict structure]. You run into problems and you need to ask someone a question right away. Having access to the engineering team and being able just to pick up the phone or text them directly and say: ‘Okay, I’m having this problem. How do I solve this?’, and then having them come back to me and adding value to that, adding stuff that I didn’t know, was amazing.”

Mauro Sauco

Co-Founder and CTO at Transparently.ai

Source: Interview with Transparently.ai