
11.2023 Office Talk

Security and compliance of generative AI

Far Eastern New Century Corporation / Jian Junru
        In recent years, generative AI technology has developed rapidly. Since the launch of ChatGPT, it has attracted widespread global attention and is regarded as a major breakthrough in artificial intelligence. However, the technology depends on vast amounts of data and information, which creates risks of leaking intellectual property, personal data, or trade secrets, and its output can be hard to verify as true or false; some users even deliberately generate disinformation. To manage information quality and risk properly, governments and technology giants around the world are actively discussing how to use generative AI correctly and securely.

        The Rise and Risks of Generative AI

        Generative AI is built on deep learning models and has achieved remarkable results in fields such as natural language processing, image generation, and audio synthesis, advancing to support capabilities such as voice interaction and image recognition. These new technologies drive innovation and efficiency and bring many opportunities, but they also come with information security and compliance challenges.

        Because generative AI can automatically produce text, images, audio, and other content, imitating human creativity by generating new information from data and returning it to users, it can create risks in areas such as information security and personal data protection. Countries and regions have issued their own guidelines and regulations in response, making AI compliance an increasingly complex issue.

        International Comparison of Generative AI Guidelines

        The EU passed a draft of the Artificial Intelligence Act in June 2023, classifying AI systems into four risk levels subject to different controls (ChatGPT, for example, is classified as "limited risk"). The draft requires AI systems to be transparent and to clearly label the content they generate, in order to prevent the technology from being used to spread false information. If an AI system uses copyrighted material during training, the rights holders must be notified in advance and their rights respected, so that the system's use is safe, transparent, traceable, non-discriminatory, and environmentally friendly. The final version of the act is expected to take effect by the end of 2023.

        Mainland China also issued the Interim Measures for the Management of Generative AI Services in July 2023, taking a supportive stance toward generative AI. The measures emphasize the responsibilities of generative AI service providers and encourage the innovative, safe, and lawful application of the technology across industries and fields, in order to promote the development of AI.

        The British and Australian governments have likewise issued generative AI guidelines for civil servants, and members of the United States Congress have proposed a "SAFE Innovation" framework covering security, accountability, protection of freedom and democracy, and explainability, to safeguard both the security and the democratic value of AI technology. Given the legislative style of the United States, the policy-making process will inevitably face dissenting voices and numerous obstacles, but both the US government and Congress are expected to work hard to overcome these challenges.

        Background and Purpose of the Government Draft

        Facing the rise of generative AI, Taiwan's Executive Yuan formally approved the draft Reference Guidelines for the Use of Generative AI by the Executive Yuan and Its Subordinate Agencies on August 31, 2023. The guidelines aim to clarify the responsibilities involved in using the technology and to establish the necessary security and internal control mechanisms, so that government agencies can use generative AI to improve administrative efficiency while reducing potential risks. The draft contains ten key points, the most important of which are that the technology must not be used to draft confidential documents and that staff must retain independent thinking and judgment. When an agency uses generative AI as a tool for carrying out its business or assisting in providing services, this must be appropriately disclosed, and the use must not threaten national security or core values. The goal is to strike a balance between technological development and the protection of citizens' rights.

        The Global Challenge of Generative AI Guidance

        The future of generative AI depends on wise regulation and responsible application, which requires the joint efforts of governments, industry, academia, and internet users worldwide. By comparing the guidelines of different countries, understanding their attitudes toward generative AI, and continuing to follow the technology's development, we can better understand how to respond.

        Image source: freepik
