<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[(R) AI Newsletter]]></title><description><![CDATA[A weekly newsLetter focused on Responsible AI!]]></description><link>https://archanaatmakuri.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!E4uC!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c5e3cd0-ad8e-416e-9cc5-cd5ffd78d2c2_278x278.png</url><title>(R) AI Newsletter</title><link>https://archanaatmakuri.substack.com</link></image><generator>Substack</generator><lastBuildDate>Tue, 07 Apr 2026 00:12:45 GMT</lastBuildDate><atom:link href="https://archanaatmakuri.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Archana]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[archanaatmakuri@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[archanaatmakuri@substack.com]]></itunes:email><itunes:name><![CDATA[Archana Atmakuri]]></itunes:name></itunes:owner><itunes:author><![CDATA[Archana Atmakuri]]></itunes:author><googleplay:owner><![CDATA[archanaatmakuri@substack.com]]></googleplay:owner><googleplay:email><![CDATA[archanaatmakuri@substack.com]]></googleplay:email><googleplay:author><![CDATA[Archana Atmakuri]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[India's AI Mission: Pioneering Growth, Start-ups, and Decolonizing AI]]></title><description><![CDATA[For the users, by the users: India's AI developments / Newsletter #4]]></description><link>https://archanaatmakuri.substack.com/p/indias-ai-mission-growth-start-ups</link><guid 
isPermaLink="false">https://archanaatmakuri.substack.com/p/indias-ai-mission-growth-start-ups</guid><dc:creator><![CDATA[Archana Atmakuri]]></dc:creator><pubDate>Wed, 17 Jul 2024 06:01:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6e571b63-7480-49fd-8105-99b263454aed_438x438.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128075; Hi there, Archana here! Welcome to the <strong>4th</strong> <strong>Edition </strong>of this newsletter on <strong>Responsible AI</strong>. </p><p>This week, I&#8217;m decoding some of the interesting discussion pointers that have come out of the Global India AI Summit held on 3-4 July. The Indian Government pledged USD 1.25 Billion towards the India AI Mission. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!B9hS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6acb44b6-87fb-465d-a9ba-8cf4175a41d2_800x2000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!B9hS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6acb44b6-87fb-465d-a9ba-8cf4175a41d2_800x2000.png 424w, https://substackcdn.com/image/fetch/$s_!B9hS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6acb44b6-87fb-465d-a9ba-8cf4175a41d2_800x2000.png 848w, https://substackcdn.com/image/fetch/$s_!B9hS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6acb44b6-87fb-465d-a9ba-8cf4175a41d2_800x2000.png 1272w, 
https://substackcdn.com/image/fetch/$s_!B9hS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6acb44b6-87fb-465d-a9ba-8cf4175a41d2_800x2000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!B9hS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6acb44b6-87fb-465d-a9ba-8cf4175a41d2_800x2000.png" width="800" height="2000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6acb44b6-87fb-465d-a9ba-8cf4175a41d2_800x2000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2000,&quot;width&quot;:800,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:159038,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!B9hS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6acb44b6-87fb-465d-a9ba-8cf4175a41d2_800x2000.png 424w, https://substackcdn.com/image/fetch/$s_!B9hS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6acb44b6-87fb-465d-a9ba-8cf4175a41d2_800x2000.png 848w, https://substackcdn.com/image/fetch/$s_!B9hS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6acb44b6-87fb-465d-a9ba-8cf4175a41d2_800x2000.png 1272w, 
https://substackcdn.com/image/fetch/$s_!B9hS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6acb44b6-87fb-465d-a9ba-8cf4175a41d2_800x2000.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>I&#8217;ve focused on the following three panels, as the first two set the context for India&#8217;s AI developments:</p><ol><li><p><strong>AI in Public Sector</strong></p></li><li><p><strong>IndiaAI: Real World AI Solutions</strong></p></li><li><p><strong>Ensuring Safety, Trust, and Governance in the AI Age
</strong></p></li></ol><h4><strong>AI in Public Sector </strong></h4><p>India&#8217;s Mission Karmayogi, the National Programme for Civil Services Capacity Building (NPCSCB), has been envisioned by the Government to enhance the use of AI in public-sector skills development. Leading this initiative is one of India&#8217;s southern states, Karnataka, which is focusing on AI literacy and skilling, AI for decision-makers and policymakers using a top-down approach, and fostering collaboration with research and industry.</p><p>Within the public sector, senior and mid-level bureaucrats at both central and state levels are undergoing training in new and emerging technologies to understand their use cases. This <strong>&#8220;Training to Ideation&#8221;</strong> approach aims to equip officials with the skills to identify areas where technology can be applied and to measure the impact on policy using data generated by applications. In the health and education sectors, the <strong>&#8220;AI for Digital Transformation&#8221;</strong> initiative is becoming more sector-specific, helping government officials to understand and effectively implement these technologies. </p><p>Eager to integrate AI into their workflows, Indian bureaucrats are exploring how best to do so by first identifying problems and challenges, determining necessary actions, and then incorporating the appropriate technology. This comprehensive approach involves developing functional competency, domain knowledge, and behavioral skills. </p><p>The general sentiment among the experts on this panel was that understanding regulation is as imperative as learning to use AI. The panelists argued that only a small percentage of civil servants need a deep technical dive into AI, but all public servants need to understand AI and must be able to make data-driven decisions. 
</p><h4><strong>IndiaAI: Real World AI Solutions</strong></h4><p>Telangana, one of India&#8217;s states, runs an independent AI mission. The State&#8217;s IT Minister, Jayesh Ranjan, shared an Artificial Intelligence (AI)-driven tool to identify substance abuse among teenagers; in trials, it predicted that 500+ youth could be helped before potentially falling prey to drug addiction. This is a deeply local effort, but globally, India is also looking to be a leader in the Global South with initiatives such as <a href="https://solve.mit.edu/challenges/heath-in-fragile-contexts-challenge/solutions/75300">E-Sanjeevani</a>. The idea of this initiative is to leverage AI/ML as a &#8220;comprehensive clinical decision support system to improve the quality of care and features of the e-Sanjeevani telemedicine platform.&#8221; There&#8217;s a lot going on in India&#8217;s healthcare sector. </p><p>AI is also being built for underserved communities in India. A scientist from Wadhwani AI, which develops and deploys human-centered AI solutions, shared its healthcare use cases: pest management for cotton farming, cough-sound analysis to help identify at-risk COVID patients and treat tuberculosis, and newborn anthropometry that provides early identification of and intervention for underweight infants.&nbsp;</p><p>Indian GenAI startup Sarvam AI has announced that it is working with Microsoft to make its Indic voice large language model (LLM) available on Azure. The collaboration reinforces Microsoft&#8217;s commitment to enabling AI-driven growth and innovation in India. </p><p>Another focus across most panels was that India is building its own LLMs. For instance, <a href="https://www.sarvam.ai/">Sarvam AI</a> is building Gen AI models targeting Indic languages and contexts, with the aim of making the development and deployment of generative AI apps in India more accurate and cost-effective. 
The company intends to provide a natural voice-based interface to LLMs, which will initially be available in Hindi and other local languages. </p><p>Amidst these exciting developments, the discourse on ethics was certainly not lost. Karya, an Indian start-up, builds datasets for and by low-income rural communities, enabling those communities to build and benefit from AI; its work truly epitomises <em><strong>&#8220;Inclusive AI.&#8221;</strong></em></p><h4><strong>Ensuring Safety, Trust, and Governance in the AI Age </strong></h4><p>I was looking forward to this particular discussion to understand the Indian perspective. India is culturally, linguistically and socially diverse, so the world is closely watching to see how the country develops its AI strategy ethically. Almost all the panelists doubled down on two aspects of AI ethics: <strong>Accountability and Transparency. </strong></p><p>Urvashi Aneja, Founder of the <a href="https://digitalfutureslab.in/">Digital Futures Lab</a>, shared some intriguing frameworks for deploying AI ethically, starting with "Data Labeling." One aspect she highlighted is the need for AI deployers to provide users with information about data labeling, similar to the labels we see in supermarkets. Here are some of her suggestions:</p><ul><li><p>Avoid using AI systems for critical decision-making. If necessary, use simpler AI systems.</p></li><li><p>Conduct adequate testing in real-world environments across various social groups, ages, and demographics, and make the results public.</p></li><li><p>Ensure transparency of companies' AI governance structures within the organization and clarify how decisions are made. Maintain transparency around data sharing between public and private entities.</p></li></ul><p>Almost all the speakers emphasized the importance of an "outcomes-based approach" by establishing a policy agenda around AI for Good. 
In this approach, society, rather than tech/corporate giants, defines the desired outcomes from AI systems, and companies are tasked with innovating towards this vision. For instance, in Indian healthcare start-ups, companies are developing AI technologies shaped by practical considerations such as geography, sustainable business models, and data constraints, rather than focusing solely on identifying the key changes needed to improve the quality of healthcare.</p><p>As India has garnered global attention for accelerating Digital Public Goods in recent years, the development and deployment of AI for and by Indians will further accelerate the country's digitalization growth exponentially!</p><p>That&#8217;s all from me.</p><p>I&#8217;m trying hard to get this out once a week; I promise to keep at it!</p><p>Until then, keep advocating for <strong>Responsible AI!</strong></p><div class="pullquote"><p><em>If you want to build Inclusive AI, you just need to start by employing the people you want to include. 
</em></p><p><strong>- Manu Chopra, Founder of Karya.in</strong></p></div>]]></content:encoded></item><item><title><![CDATA[The Global Index on Responsible AI is out!]]></title><description><![CDATA[My first Responsible AI (RAI) project / Newsletter #3]]></description><link>https://archanaatmakuri.substack.com/p/the-global-index-on-responsible-ai</link><guid isPermaLink="false">https://archanaatmakuri.substack.com/p/the-global-index-on-responsible-ai</guid><dc:creator><![CDATA[Archana Atmakuri]]></dc:creator><pubDate>Fri, 28 Jun 2024 04:45:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7c49243-6e66-4969-b7d8-c9bcc0f3100d_315x399.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!G7Sw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe77c8dbf-7fbc-49a7-80b7-1e0e707fc402_1480x380.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!G7Sw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe77c8dbf-7fbc-49a7-80b7-1e0e707fc402_1480x380.png 424w, https://substackcdn.com/image/fetch/$s_!G7Sw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe77c8dbf-7fbc-49a7-80b7-1e0e707fc402_1480x380.png 848w, https://substackcdn.com/image/fetch/$s_!G7Sw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe77c8dbf-7fbc-49a7-80b7-1e0e707fc402_1480x380.png 1272w, 
https://substackcdn.com/image/fetch/$s_!G7Sw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe77c8dbf-7fbc-49a7-80b7-1e0e707fc402_1480x380.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!G7Sw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe77c8dbf-7fbc-49a7-80b7-1e0e707fc402_1480x380.png" width="1456" height="374" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e77c8dbf-7fbc-49a7-80b7-1e0e707fc402_1480x380.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:374,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:112240,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!G7Sw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe77c8dbf-7fbc-49a7-80b7-1e0e707fc402_1480x380.png 424w, https://substackcdn.com/image/fetch/$s_!G7Sw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe77c8dbf-7fbc-49a7-80b7-1e0e707fc402_1480x380.png 848w, https://substackcdn.com/image/fetch/$s_!G7Sw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe77c8dbf-7fbc-49a7-80b7-1e0e707fc402_1480x380.png 1272w, 
https://substackcdn.com/image/fetch/$s_!G7Sw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe77c8dbf-7fbc-49a7-80b7-1e0e707fc402_1480x380.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Hello everyone, </p><p>The last two weeks have been exciting, as the 1st Edition of the Global Index on Responsible AI (GIRAI) was released on 13 June 2024. </p><p>This is my first project in the RAI space. I had the opportunity to be part of the Index as a country researcher for Singapore. 
Over the span of 5 months from the end of 2023 through to March 2024, I carried out evidence-based research for Singapore, assessing the country&#8217;s AI capabilities from a human rights perspective. As an aspiring Responsible AI Expert, this project has indeed given me a headstart into the world of RAI. </p><p>GIRAI is the first tool to set globally-relevant benchmarks for responsible AI and assess them in countries around the world. </p><p><strong>GIRAI truly integrates Human rights at the centre of AI Governance. Here&#8217;s a primer on how RAI is measured in the Index. </strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!D_G-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4135d6fe-b09f-4145-bbef-d9b6ab267131_900x1248.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!D_G-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4135d6fe-b09f-4145-bbef-d9b6ab267131_900x1248.png 424w, https://substackcdn.com/image/fetch/$s_!D_G-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4135d6fe-b09f-4145-bbef-d9b6ab267131_900x1248.png 848w, https://substackcdn.com/image/fetch/$s_!D_G-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4135d6fe-b09f-4145-bbef-d9b6ab267131_900x1248.png 1272w, https://substackcdn.com/image/fetch/$s_!D_G-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4135d6fe-b09f-4145-bbef-d9b6ab267131_900x1248.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!D_G-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4135d6fe-b09f-4145-bbef-d9b6ab267131_900x1248.png" width="900" height="1248" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4135d6fe-b09f-4145-bbef-d9b6ab267131_900x1248.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1248,&quot;width&quot;:900,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:229787,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!D_G-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4135d6fe-b09f-4145-bbef-d9b6ab267131_900x1248.png 424w, https://substackcdn.com/image/fetch/$s_!D_G-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4135d6fe-b09f-4145-bbef-d9b6ab267131_900x1248.png 848w, https://substackcdn.com/image/fetch/$s_!D_G-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4135d6fe-b09f-4145-bbef-d9b6ab267131_900x1248.png 1272w, https://substackcdn.com/image/fetch/$s_!D_G-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4135d6fe-b09f-4145-bbef-d9b6ab267131_900x1248.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Setting aside the rankings for a moment, there are several reasons why this Index is pertinent. </p><ul><li><p><strong>It takes into consideration implications of AI on foundational human rights: </strong>More often concerns around bias, discrimination and deepfakes drive discussions on why we need to balance innovation with regulation. However, the less talked about core human rights aspects such as <strong>Labour Rights,</strong> <strong>Education, Cultural and Linguistic Diversity, Environmental Protection and Sustainability, Health and Well-Being </strong>are not at the forefront of discussions which are part of GIRAI. 
</p></li><li><p><strong>It is refreshing that the study originated from South Africa: </strong>Discussions led by Western countries tend to be Western-centric. Having an Index originating from the Global South and focused exclusively on RAI makes this Index unique. More importantly, it highlights the significant disparity between countries&#8217; Responsible AI developments. </p></li></ul><p>Download the report <a href="https://coral-trista-52.tiiny.site/">here.</a></p><div><hr></div><p>Going back to my thoughts about the unique aspects covered in the Index: I&#8217;m sharing some interesting personal observations from New Zealand&#8217;s and Singapore&#8217;s AI developments. </p><h4><strong>#1: New Zealand&#8217;s approach to Indigenous Data Sovereignty </strong></h4><p><em>&#8220;The <strong>indigenous Maori community</strong> in New Zealand both recorded and annotated 300 hours of audio data of the Te Reo Maori language. This is enough data to build tools such as spell-checkers, grammar assistants, speech recognition, and speech-to-text technology. However, although the data originated from Maori speakers across New Zealand and was annotated and cleaned by the Maori community itself, Western-based data sharing/open data initiatives meant that the Maori community had to explicitly prevent corporate entities from getting hold of the dataset. The community thus established the Maori Data Sovereignty Protocols in order to take control of their data and technology. Sharing their data, the Maori argued, is to invite commercial actors to shape the future of their language through tools developed by those without connection to the language. </em></p><p><em>By not sharing their data, the Maori argue they are able to maintain their autonomy and right to self-determination. They insist that, if any technology is to be built using such community sourced data, it must directly and primarily benefit the Maori people. 
Accordingly, such technology needs to be built by the Maori community itself, since they hold the expert knowledge and experience of the language.&#8221; </em></p><h6>Source: https://dl.acm.org/doi/pdf/10.1145/3551624.3555290</h6><h4><strong>#2: Singapore&#8217;s style of regulating AI (without legislation)</strong></h4><p>It is impressive to see the Singapore government&#8217;s efforts to regulate AI. Most of the government&#8217;s regulatory efforts have been voluntary, and this model is unique to Singapore. The Government has released many frameworks and documents stating the importance of testing AI tools against global ethical standards for deploying AI, such as the <a href="https://www.mas.gov.sg/~/media/MAS/News%20and%20Publications/Monographs%20and%20Information%20Papers/FEAT%20Principles%20Final.pdf">Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore&#8217;s Financial Sector</a> and the Model AI Governance Framework. In conclusion, Singapore has a variety of <em>relevant sectoral and voluntary frameworks as well as binding regulation in other domains such as data protection and online safety.</em> Moreover, the development, integration and responsible governance of AI is a strategic priority across Singaporean policymaking. Given that Singapore is an &#8220;Asian Democratic&#8221; country, even soft regulations will (mostly) be taken seriously. This approach is unique and works for Singapore. </p><div><hr></div><h3>Here are 3 fundamental mechanisms that every country must implement to ensure AI safety:</h3><ol><li><p><strong>Access to Redress </strong>(and Remedy): Have a mechanism for citizens to report incidents where AI has negatively impacted them. There is a huge gap at the moment in figuring out how best to establish remedy and redress channels nationally/domestically. 
</p></li><li><p><strong>Impact Assessments: </strong>Impact assessment can be defined as &#8220;<em>a structured process for considering the implications, for people and their environment, of proposed actions while there is still an opportunity to modify [or abandon] the proposed actions&#8221; if the impact they pose contains a potential or actual threat of harm. </em></p></li><li><p><strong>Human Oversight: </strong>So far, there are some interesting approaches to human oversight. Consider the European Union&#8217;s human-in-the-loop and human-in-command approaches, which are different governance mechanisms for implementing the principle of human oversight and determination. &#8216;Human-in-the-loop&#8217; is defined as the capability for humans to monitor every stage of the AI lifecycle and assume a supervisory role that can intervene on a needs basis only. Human-in-command, by contrast, refers to oversight of an AI system&#8217;s activity, including its economic, societal, legal and ethical impact, and therefore requires human decision-making around when to use an AI system in a given situation. </p></li></ol><p>In case you missed the report, you can find it <a href="https://coral-trista-52.tiiny.site/">here.</a></p><p>That&#8217;s all from me. </p><p>Meet you next week!</p><p>Until then, keep advocating for <strong>Responsible AI!</strong></p>]]></content:encoded></item><item><title><![CDATA[Responsible AI Frameworks: Where are we at?]]></title><description><![CDATA[Responsible AI: one aspect a week! 
/ Newsletter #2]]></description><link>https://archanaatmakuri.substack.com/p/responsible-ai-frameworks-where-are</link><guid isPermaLink="false">https://archanaatmakuri.substack.com/p/responsible-ai-frameworks-where-are</guid><dc:creator><![CDATA[Archana Atmakuri]]></dc:creator><pubDate>Tue, 11 Jun 2024 04:45:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!J36t!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1702ade7-7551-4273-b38f-042c245d96bb_390x844.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello everyone!</p><p>Welcome to newsletter #2 on Responsible AI from a policy perspective - a topic I&#8217;m exploring in this newsletter. This week, I&#8217;m delving into the question: </p><p><strong>What are the current best practices in place to mitigate AI&#8217;s errors? </strong></p><p>I will be looking into the existing practices in place to tackle misfires from AI - and how beneficial they have been. These approaches tie into wider Responsible AI practices. </p><p>You may have come across Google&#8217;s bizarre artificial intelligence-powered search results, which claimed that Barack Obama is a Muslim, told people to eat rocks, etc., in the latest high-profile case of the company&#8217;s AI systems misfiring. </p><p>As one LLM scholar put it, &#8220;<em>there&#8217;s a few problems here for the AI; one is finding a good source that&#8217;s not a joke, but another is interpreting what the source is saying correctly. 
This is something that AI systems have trouble doing, and it&#8217;s important to note that even when it does get a good source, it can still make errors.&#8221;</em></p><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!J36t!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1702ade7-7551-4273-b38f-042c245d96bb_390x844.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!J36t!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1702ade7-7551-4273-b38f-042c245d96bb_390x844.png 424w, https://substackcdn.com/image/fetch/$s_!J36t!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1702ade7-7551-4273-b38f-042c245d96bb_390x844.png 848w, https://substackcdn.com/image/fetch/$s_!J36t!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1702ade7-7551-4273-b38f-042c245d96bb_390x844.png 1272w, https://substackcdn.com/image/fetch/$s_!J36t!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1702ade7-7551-4273-b38f-042c245d96bb_390x844.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!J36t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1702ade7-7551-4273-b38f-042c245d96bb_390x844.png" width="390" height="844" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1702ade7-7551-4273-b38f-042c245d96bb_390x844.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:844,&quot;width&quot;:390,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!J36t!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1702ade7-7551-4273-b38f-042c245d96bb_390x844.png 424w, https://substackcdn.com/image/fetch/$s_!J36t!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1702ade7-7551-4273-b38f-042c245d96bb_390x844.png 848w, https://substackcdn.com/image/fetch/$s_!J36t!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1702ade7-7551-4273-b38f-042c245d96bb_390x844.png 1272w, https://substackcdn.com/image/fetch/$s_!J36t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1702ade7-7551-4273-b38f-042c245d96bb_390x844.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><p>The problem of <strong>hallucination</strong> in an AI system is always going to be a risk. To mitigate it, the same companies are rolling out their Responsible AI plans, and most of them take similar approaches (listed below). But how effective have these approaches been?</p><h4><strong>1: <a href="https://openai.com/index/learning-from-human-preferences/">Learning from Human Preferences</a></strong></h4><p>This is a classic approach to keeping a human in the loop while deploying AI. OpenAI described this approach in a 2017 blog post, where they developed an algorithm that can infer what humans want by being told which of two proposed behaviors is better. 
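The pairwise-comparison idea can be sketched in a few lines of code. The sketch below is purely illustrative (the feature vectors, learning rate, and data are invented for the example; real reward models are neural networks over model outputs, not hand-made features): it fits a toy reward model so that the behavior humans preferred ends up with the higher score, using a Bradley-Terry-style preference loss.

```python
import math

# Toy "reward model": a linear score over hand-made behavior features.
# Trained so that, for each human comparison, the preferred behavior
# ends up with the higher score (a Bradley-Terry-style preference loss).

def score(w, features):
    return sum(wi * fi for wi, fi in zip(w, features))

def train_reward_model(comparisons, n_features, lr=0.5, epochs=200):
    """comparisons: list of (preferred_features, rejected_features) pairs."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for preferred, rejected in comparisons:
            # Model's current probability that "preferred" beats "rejected"
            p = 1.0 / (1.0 + math.exp(score(w, rejected) - score(w, preferred)))
            # Gradient step that decreases -log(p)
            for i in range(n_features):
                w[i] += lr * (1.0 - p) * (preferred[i] - rejected[i])
    return w

# Invented data: feature 0 = "helpfulness", feature 1 = "toxicity".
comparisons = [
    ([1.0, 0.0], [0.2, 0.8]),  # humans preferred the helpful behavior
    ([0.9, 0.1], [0.1, 0.9]),
]
w = train_reward_model(comparisons, n_features=2)
assert score(w, [1.0, 0.0]) > score(w, [0.2, 0.8])
```

In real pipelines, the learned reward model then serves as the training signal for a reinforcement learning step, which is what makes the approach "reinforcement learning from human feedback".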
To make the models safer, more helpful, and more aligned, OpenAI uses an existing technique called <strong>Reinforcement Learning from Human Feedback (RLHF)</strong>: on prompts submitted by customers to the API,&nbsp;human labelers provide demonstrations of the desired model behavior and rank several outputs from the models. </p><blockquote><p>&#128161; Techniques like reinforcement learning from human feedback, which incorporates feedback into an LLM&#8217;s training, can also help improve the quality of its answers.&nbsp;Similarly, LLMs could be trained specifically for the task of identifying when a question cannot be answered, and it could also be useful to instruct them to carefully assess the quality of a retrieved document before generating an answer. However, research suggests there are several foundational and technical limitations to RLHF fine-tuning which make it difficult to develop robustly aligned AI systems using it.</p></blockquote><h4>2: <a href="https://www.frontiermodelforum.org/uploads/2023/10/FMF-AI-Red-Teaming.pdf">Red Teaming</a></h4><p>Red teaming is a popular process used to test and improve AI models by deliberately challenging them to ensure they are safe and reliable. This involves simulating attacks or problematic scenarios to see how the model reacts. For example, testers might try to make the AI generate harmful, biased, or incorrect information, or try to extract sensitive data from it. By doing this, any weaknesses or flaws in the AI model can be identified. Once these problems are found, the AI is retrained with new data and instructions to correct its behavior and reinforce its defenses. This continuous process helps make the AI more robust and secure.</p><p>A similar concept is jailbreaking LLMs via <a href="https://openreview.net/pdf?id=x3Ltqz1UFg">Persona Modulation</a>. 
The research paper on Persona Modulation explores manual and automated attacks: a general jailbreaking method that steers state-of-the-art aligned LLMs into adopting a specific personality that is likely to comply with harmful instructions.</p><h4><strong>3. <a href="https://www.technologyreview.com/2023/11/06/1082996/the-inside-scoop-on-watermarking-and-content-authentication/">AI-labeling/Watermarking </a></strong></h4><p>A research paper by Meta, <a href="https://scontent.fwlg3-1.fna.fbcdn.net/v/t39.2365-6/432053537_1590740255013767_6682089902649073417_n.pdf?_nc_cat=108&amp;ccb=1-7&amp;_nc_sid=3c67a6&amp;_nc_ohc=yRx_R2axUrYQ7kNvgFea47F&amp;_nc_ht=scontent.fwlg3-1.fna&amp;oh=00_AYDIfoFHs4yV4oEeXA5Kj244Qf-BaaWc12U6MF-QTGknJA&amp;oe=666DB6E2">Watermarking Makes Language Models Radioactive</a>, is a great start to understanding this concept. The intent of watermarking is simple: as with traditional watermarking of content, in the AI context the content, whether training data or the final text, image or video, carries traces back to its source, which makes detection easier and much more reliable. But this approach will only succeed if the watermarking system is designed into the entire lifecycle of the algorithm. Further questions arise from this. What is the purpose of the mark, and what information should it carry? Who should be able to read the mark, and when? How should the lack of a mark, or the presence of multiple marks, be interpreted? </p><p>Moreover, researchers have found that watermarks for AI-generated text are easy to remove, and can be stolen and copied, rendering them useless. </p><p>While these approaches are useful, none has proven sufficient on its own to mitigate the risks. Ultimately, we can draw one key lesson from them: the best approach is one that involves a diverse set of stakeholders, especially humans. 
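To make the watermark-detection idea above concrete, here is a deliberately simplified sketch (not Meta's actual scheme; the key, threshold, and function names are invented for illustration): a secret key deterministically marks part of the vocabulary as "green", a watermarking generator would favor green words, and a detector checks whether a text contains suspiciously many of them.

```python
import hashlib

# Simplified illustration of statistical text watermarking (not Meta's
# scheme): a secret key deterministically assigns roughly half of all
# words to a "green list"; a watermarking generator would prefer green
# words, so a detector can test whether a text has suspiciously many.

SECRET_KEY = "newsletter-demo-key"  # invented key for the example

def is_green(word, key=SECRET_KEY):
    # Hash key + word so the partition is secret but reproducible.
    digest = hashlib.sha256((key + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text):
    words = text.split()
    if not words:
        return 0.0
    return sum(is_green(w) for w in words) / len(words)

def looks_watermarked(text, threshold=0.75):
    # Unwatermarked text should hover near 0.5; heavily watermarked
    # text pushes the green fraction well above it.
    return green_fraction(text) >= threshold
```

Real schemes bias token sampling inside the model and use a proper statistical test rather than a fixed threshold; and, as noted above, such marks can often be weakened by paraphrasing or removal attacks.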
</p><div><hr></div><p>I&#8217;m excited to share that the Global Index on Responsible AI (GIRAI) report will be out this week, and I will share my experience of being part of the project in next week&#8217;s newsletter. </p><p>Until then, keep advocating for <strong>Responsible AI. </strong></p>]]></content:encoded></item><item><title><![CDATA[AI Governance: Bio wars and Chemical weapons]]></title><description><![CDATA[Responsible AI: one aspect a week! / Newsletter #1]]></description><link>https://archanaatmakuri.substack.com/p/ai-governance-bio-wars-and-chemical</link><guid isPermaLink="false">https://archanaatmakuri.substack.com/p/ai-governance-bio-wars-and-chemical</guid><dc:creator><![CDATA[Archana Atmakuri]]></dc:creator><pubDate>Tue, 28 May 2024 03:07:10 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F886b1245-3c03-4d08-848a-85b09da23ecf_918x562.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello everyone!</p><p>Welcome to my first newsletter on Responsible AI - a topic that's becoming increasingly important as artificial intelligence advances rapidly. As someone passionate about tech policy and ethical AI, I'm thrilled to share my thoughts and insights here. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://archanaatmakuri.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading (R) AI Newsletter! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>You've likely heard about deepfakes, voice synthesis, and other concerning AI use cases in the news. However, the implications go much deeper, especially in regions lacking infrastructure to deal with AI's negative impacts. That's why governments worldwide are coming together to establish guardrails for responsible AI development and deployment. </p><p>At the recent AI Safety Summit in Seoul held on 21-22 May 2024, the Seoul Ministerial Statement emphasised a global strategy focused on AI safety, innovation, and inclusivity. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!g35e!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F886b1245-3c03-4d08-848a-85b09da23ecf_918x562.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!g35e!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F886b1245-3c03-4d08-848a-85b09da23ecf_918x562.png 424w, https://substackcdn.com/image/fetch/$s_!g35e!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F886b1245-3c03-4d08-848a-85b09da23ecf_918x562.png 848w, 
https://substackcdn.com/image/fetch/$s_!g35e!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F886b1245-3c03-4d08-848a-85b09da23ecf_918x562.png 1272w, https://substackcdn.com/image/fetch/$s_!g35e!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F886b1245-3c03-4d08-848a-85b09da23ecf_918x562.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!g35e!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F886b1245-3c03-4d08-848a-85b09da23ecf_918x562.png" width="918" height="562" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/886b1245-3c03-4d08-848a-85b09da23ecf_918x562.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:562,&quot;width&quot;:918,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:87836,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!g35e!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F886b1245-3c03-4d08-848a-85b09da23ecf_918x562.png 424w, https://substackcdn.com/image/fetch/$s_!g35e!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F886b1245-3c03-4d08-848a-85b09da23ecf_918x562.png 848w, 
https://substackcdn.com/image/fetch/$s_!g35e!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F886b1245-3c03-4d08-848a-85b09da23ecf_918x562.png 1272w, https://substackcdn.com/image/fetch/$s_!g35e!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F886b1245-3c03-4d08-848a-85b09da23ecf_918x562.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h2>Some key takeaways relevant to Responsible AI:</h2><p>- <strong>International Approach</strong>: Formation of AI Safety 
Institutes in 10 countries to evaluate existing systems, conduct foundational research, and share information across governments. The goal is to accelerate the advancement of AI safety science by forging a common understanding of AI safety, aligning research efforts and establishing shared standards and testing methodologies. This enables a multinational approach to ensure the safe and responsible development of AI technologies.</p><p>- <strong>Recognising Global Threats of AI</strong>: Recognition that frontier AI capabilities could assist non-state actors with chemical/biological weapons; the Ministers therefore stressed adherence to relevant international laws such as the Chemical Weapons Convention, the Biological and Toxin Weapons Convention, UN Security Council Resolution 1540, and international human rights law.  </p><p>- <strong>Building an AI Ecosystem:</strong> Plans to develop proposals with industry, civil society, and academia on risk thresholds for discussion at the next AI Action Summit. Sixteen major tech companies, including Amazon, Google, and OpenAI, also signed the "Frontier AI Safety Commitments," committing to responsible AI practices.</p><h3>Drilling Down into AI's Dual-Use Danger: <em>Chemical/Bio weapons</em></h3><p>Why are governments rushing to establish a global governance approach to <strong>chemical or biological weapons and AI</strong>?</p><p>Consider this case: a group of chemists built ChemCrow, an LLM chemistry agent designed to accomplish tasks across organic synthesis, drug discovery, and materials design. ChemCrow is able to perform complex operations when given simple text commands, such as &#8220;<em>plan and execute the synthesis of an insect repellent</em>&#8221;. </p><p>Now imagine AI systems such as ChemCrow falling into the hands of malicious non-state groups, who could use them to develop, produce or acquire chemical or biological weapons. 
One can imagine the existential risk to humanity and the catastrophic consequences, which makes addressing this both crucial and urgent. </p><div><hr></div><h4><strong>In case you missed this month&#8217;s latest developments&#8230;</strong></h4><ol><li><p>UN published the report "<a href="https://www.ohchr.org/sites/default/files/documents/issues/business/b-tech/taxonomy-GenAI-Human-Rights-Harms.pdf">Taxonomy of Human Rights Risks Connected to Generative AI</a>". </p></li><li><p><strong>Council of Europe adopted <a href="https://search.coe.int/cm#{%22CoEObjectId%22:[%220900001680afb11f%22],%22sort%22:[%22CoEValidationDate%20Descending%22]}">first international treaty on artificial intelligence</a>. </strong></p></li><li><p>New UN research reveals <strong><a href="https://asiapacific.unwomen.org/sites/default/files/2024-05/ap-c871-ai-research-report-2024-full.pdf">impact of AI and cybersecurity on women, peace and security </a></strong>in South-East Asia. </p></li></ol><div><hr></div><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Vp_h!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3350d8ed-7251-4cbb-bd6d-f9595b280520_1480x1062.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Vp_h!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3350d8ed-7251-4cbb-bd6d-f9595b280520_1480x1062.png 424w, https://substackcdn.com/image/fetch/$s_!Vp_h!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3350d8ed-7251-4cbb-bd6d-f9595b280520_1480x1062.png 848w, 
https://substackcdn.com/image/fetch/$s_!Vp_h!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3350d8ed-7251-4cbb-bd6d-f9595b280520_1480x1062.png 1272w, https://substackcdn.com/image/fetch/$s_!Vp_h!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3350d8ed-7251-4cbb-bd6d-f9595b280520_1480x1062.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Vp_h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3350d8ed-7251-4cbb-bd6d-f9595b280520_1480x1062.png" width="1456" height="1045" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3350d8ed-7251-4cbb-bd6d-f9595b280520_1480x1062.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1045,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1020711,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Vp_h!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3350d8ed-7251-4cbb-bd6d-f9595b280520_1480x1062.png 424w, https://substackcdn.com/image/fetch/$s_!Vp_h!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3350d8ed-7251-4cbb-bd6d-f9595b280520_1480x1062.png 848w, 
https://substackcdn.com/image/fetch/$s_!Vp_h!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3350d8ed-7251-4cbb-bd6d-f9595b280520_1480x1062.png 1272w, https://substackcdn.com/image/fetch/$s_!Vp_h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3350d8ed-7251-4cbb-bd6d-f9595b280520_1480x1062.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>On inclusive AI, someone said &#8220;You do not have to be an expert to talk about AI, because if that's the case, then the global 
majority will not be able to have a seat at the table because by design, the right seat or the seat for technologies is San Francisco, China or Dublin.&#8221; To democratise AI, we need diverse perspectives, so here I am! </p><p>The aim of this newsletter is to advocate for the importance of Responsible AI development, deployment and use, and to map Responsible AI within the wider AI governance context. </p><p>So, stay tuned for analysis of the latest developments, challenges, and opportunities in Responsible AI. </p>]]></content:encoded></item></channel></rss>