Seven companies engaged in artificial intelligence (AI) development have volunteered to self-regulate their tech. Late last week, the White House confirmed that Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI are signatories to a document meant to ensure the "safe, secure, and transparent development of AI technology."

Generative AI has taken the world by storm in the last nine months. The Summer 2023 Fortune/Deloitte CEO Survey revealed that 37% of respondents, CEOs from 19 industries, said their companies are already implementing generative AI to some degree, and 55% said they are evaluating and experimenting with it. An even higher number, 79%, believe generative AI will help improve efficiency, while 52% said it will increase growth opportunities.

However, it isn't a stretch to call AI a double-edged sword, considering its security and privacy ramifications, not to mention its tendency to spew mis- and disinformation. This is why the document, titled Ensuring Safe, Secure, and Trustworthy AI and signed by the seven companies, stands on three pillars: safety, security, and trust.

How Will Companies Self-Regulate AI Development?

Safety

As part of the safety protocol, the companies have committed to red-teaming efforts to eliminate societal risks and national security concerns, such as the tech's applicability to developing biological, chemical, and radiological weapons; cybersecurity risks (vulnerability discovery); bias or discrimination; and the risk of self-replication. The companies have also committed to information-sharing efforts among each other and with the government.

The signatories would also have to participate in a forum or establish a mechanism that oversees developing, advancing, and adopting shared standards and best practices for frontier AI safety.

See More: OpenAI Faces Its First Serious Regulatory Turbulence Over ChatGPT

Security

Under the security commitments, the companies agreed to establish external and internal threat detection programs. They would also incentivize third-party vulnerability detection and responsible disclosure through bug bounty programs, contests, or prizes.

Trust

To ensure trust, the companies would institute provenance and/or watermarking systems for any audio or visual content created by any of their proprietary and publicly available AI tools.

The document also necessitates periodic safety evaluations detailing the capabilities, limitations, and what constitutes appropriate and inappropriate use for all AI service versions. Finally, signatories would need to support research and development initiatives to overcome major societal challenges, including climate change, early cancer detection and prevention, and combating cyber threats.

The document applies only to generative AI tools and models more powerful than the current industry standards. These include GPT-4, Claude 2, PaLM 2, Titan, and, in the case of image generation, DALL-E 2.

Representatives from the seven companies confirmed their commitment to the initiative on their respective blogs. Meta president of global affairs Nick Clegg said, "Meta welcomes this White House-led process, and we are pleased to make these voluntary commitments alongside others in the sector. They are an important first step in ensuring responsible guardrails are established for AI, and they create a model for other governments to follow."

Status of AI Regulations in Other Countries

While a federal law concerning AI regulations takes shape in the United States, the European Union is on track to pass its AI Act later this year.