请问HN:如何将高级人工智能风险政策转化为开发任务,并确保其执行?

1作者: percfeg2 个月前原帖
我是一家受监管行业的开发负责人。我们越来越多地将生成式人工智能应用集成到我们的技术栈中,非常喜欢这些应用,但在确保这些应用在部署前符合内部人工智能风险政策方面遇到了挑战。通常,这些政策由治理、风险和合规(GRC)团队制定,因此非常高层次且以业务为导向,这使得它们难以转化为可操作的开发项目。这种模糊性也使得有效测试实施的控制措施是否真正执行了预期政策变得困难。 我想了解其他人是否面临类似的障碍,以及你们是如何应对这些问题的。具体来说: - 你们如何将抽象的人工智能政策转化为具体的、可测试的要求,以供开发团队使用? - 你们是在持续集成/持续交付(CI/CD)管道中自动执行这些特定于人工智能的政策,还是主要依赖于部署后的监控? - 你们为此使用了哪些具体的工具、框架或平台? - 在将人工智能风险管理/治理融入软件开发生命周期(SDLC)过程中,你们还遇到了哪些其他挑战? 提前感谢!
查看原文
I am a dev lead in a regulated industry. We&#x27;re increasingly integrating GenAI apps into our stack, love them, but running into challenges ensuring these apps are aligned with internal AI risk policies before they get deployed. Often these policies are written by GRC team and hence very high level and business oriented, which making them hard to translate into actionable dev items. This ambiguity also makes it hard to effectively test whether the implemented controls actually enforce the intended policy.<p>I&#x27;m interested to know if others are facing similar hurdles and how you are tackling them. Specifically:<p>- How do you turn abstract AI policies into specific, testable requirements for your development teams?<p>- Are you automating the enforcement of these AI specific policies within your CI&#x2F;CD pipelines or are you primarily relying on post deployment monitoring?<p>- What specific tools, frameworks, or platforms are you using for this purpose?<p>- What other challenges are you encountering in operationalise AI risk management&#x2F;governance in SDLC?<p>Thanks in advance!