AI Safety and Regulation: A Call to Action

Since the release of ChatGPT in November 2022, significant developments in AI/ML have kept arriving, and with them a second question has moved to the fore: how to manage AI safety, transparency, and bias when building models. This has drawn the attention of government regulators and even heads of state. How can we ensure that AI is safe for humans? A debate soon followed about the 'extinction of humans due to AI,' and media outlets began pouring out articles and news stories on this 'threat,' ranging from Time's 'An AI pause is humanity's best bet for preventing Extinction' to Forbes' 'Will ChatGPT Lead To Extinction Or Elevation Of Humanity? A Chilling Answer.'

The US government has released an 'Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,' whose salient features are summarized in the 'FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.' Continuing with the safety theme, the UK Government hosted its first AI Safety Summit on 1-2 November, and a policy paper was released under the name 'The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023.' G7 leaders announced the Hiroshima AI Process – International Code of Conduct, which targets a series of responsible practices to identify and mitigate risks across the AI development and deployment lifecycle, including evaluations, information sharing, governance approaches, security procedures, and transparency measures.

In the coming week, I will record an episode of the Open Tech Talks podcast covering this topic in detail.
News & Updates…This week made history with the new AI features and products announced, fueling the technology revolution.
The Cloud: the backbone of the AI revolution