Microsoft launches “Correction,” a tool to fix inaccurate AI-generated information
Microsoft has launched a new feature called “Correction” to address AI hallucinations, in which AI-generated content presents false or fabricated information. The tool is designed to automatically fix inaccuracies in AI outputs by fact-checking them against verified sources. The feature is part of Microsoft’s Azure AI Content Safety system and builds on its “Groundedness Detection” capability, which flags ungrounded or hallucinated content. The “Correction” tool can work with any text-generating AI model, including popular ones such as Meta’s Llama and OpenAI’s GPT-4. The feature is currently available in preview.
It compares AI-generated text against reliable documents or transcripts to verify accuracy. In addition, Microsoft has rolled out updates to strengthen the security, safety, and privacy of AI systems. These include an expansion of the Secure Future Initiative (SFI), which emphasizes secure design, secure default settings, and secure operations. The company also announced confidential inferencing for the Azure OpenAI Service Whisper model, which is particularly beneficial for industries handling sensitive data, as it keeps information private during AI inference.
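The grounding check described above is exposed through Azure AI Content Safety’s groundedness-detection preview API, where setting a correction flag asks the service to return a rewritten, grounded answer. The sketch below only builds the request it would send; the endpoint URL, API version, and sample texts are placeholders, and the field names reflect the public preview documentation as an assumption rather than a guarantee.

```python
import json

# Placeholder values -- substitute your own Azure Content Safety resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_VERSION = "2024-09-15-preview"  # preview API version (assumption)

def build_correction_request(ai_text: str, sources: list[str]) -> dict:
    """Build the JSON body for a groundedness check with correction enabled.

    The model's answer (`text`) is checked against `groundingSources`;
    `correction: True` asks the service to also return a corrected,
    grounded rewrite of the answer.
    """
    return {
        "domain": "Generic",          # or "Medical"
        "task": "Summarization",      # or "QnA"
        "text": ai_text,              # the AI output to verify
        "groundingSources": sources,  # trusted documents or transcripts
        "correction": True,           # request an auto-corrected answer
    }

payload = build_correction_request(
    "The plan was approved in 2021.",
    ["Meeting transcript: the plan was approved in March 2022."],
)
url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version={API_VERSION}"
print(url)
print(json.dumps(payload, indent=2))
# An actual request would POST `payload` to `url` with an
# `Ocp-Apim-Subscription-Key` header set to your Content Safety key.
```

In a real deployment, the response would indicate whether the text is ungrounded and, when correction is requested, include the revised text, which the application can substitute for the original model output.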