Responsible AI refers to the practice of designing, developing, and deploying artificial intelligence with good intentions to empower users and society at large.
It encompasses fairness, transparency, accountability, privacy, and security. By adhering to these principles, AI technology can be used in a way that is ethical and beneficial for everyone.
We believe in the power of AI to improve lives, and that's why we've created a comprehensive set of principles around responsible AI that we rigorously apply throughout the development and use of our own product. This includes ensuring fairness in our algorithms, transparency in how our decisions are made, and strong accountability measures to ensure our AI remains beneficial. These are just some of the many ways we are working to ensure AI is used for good:
Engaging with the chat means making legitimate and lawful use of it for its intended purpose, such as obtaining information, getting recommendations to support your customers, or receiving technical support for your daily operations. This excludes any activities that violate laws or regulations, such as fraud, identity theft, or harassment.
Yes, you can create and share content as long as it is not offensive, discriminatory, or defamatory and does not infringe upon anyone's rights. Content that is in good taste and respectful of others is welcome.
Content is considered offensive or discriminatory if it promotes hatred, violence, or harm against individuals or groups based on race, ethnicity, religion, gender, sexual orientation, disability, or any other characteristic. When in doubt, it's best to err on the side of caution and choose not to share content that could be interpreted as harmful.
Malicious content is defined as content in the form of malware, prompts, or executable code inserted into the chat with the intent to gain unauthorized access to personally identifiable information (PII), prompts, or infrastructure.
Inserting malicious content is strictly prohibited. Doing so can compromise the security and integrity of the service and the privacy of its users.
If this occurs, please notify the provider immediately. Your promptness in reporting such an incident is crucial for maintaining the security and stability of the service.
Yes, you must adhere to the code of conduct outlined by Microsoft for the Azure OpenAI Service, which includes guidelines for respectful and responsible communication, privacy, and security practices.
You must not transmit any data that is restricted by data protection laws, confidentiality obligations, export restrictions, other statutory provisions, or third-party rights. This includes, but is not limited to, personally identifiable information (PII), client-identifying data (CID), and any personal data that you do not have the right to share.
Sharing prohibited data can result in a breach of data protection laws and may lead to legal consequences for you.
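One practical safeguard against accidentally transmitting restricted data is to screen prompts before they are sent. The sketch below is a minimal, illustrative example only: the pattern names and regexes are assumptions for this example and cover just two obvious PII shapes (email addresses and US-style phone numbers); a real deployment would need a vetted PII-detection service and policies matching your jurisdiction.

```python
import re

# Illustrative, naive pre-send filter. The patterns below are assumptions
# for this sketch and are NOT a complete PII taxonomy.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace recognized PII with a placeholder tag before sending."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact_pii("Contact jane.doe@example.com or 555-123-4567."))
# prints: Contact [REDACTED EMAIL] or [REDACTED PHONE].
```

A filter like this reduces risk but does not remove your responsibility: redaction can miss unusual formats, so review remains necessary before sharing data with any external service.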
Yes, human oversight is crucial when using AI-generated content, particularly for critical tasks or when the content is intended for public dissemination or could have legal implications.
No, you should not expect the content generated by the AI service to be 100% accurate, complete, or error-free. The nature of generative AI means that while it can provide valuable information, its outputs should be treated as potentially fallible and verified accordingly.
The end user is fully responsible for any actions taken based on the content generated by the AI service. It is crucial to review and verify the information to ensure that AI-generated content complies with all applicable laws, as well as internal policies and procedures. This means you may need to review, adjust, modify, or delete the content before it is used or shared.