How do we keep control of generative AI? (Deeploy)
Deeploy on ChatGPT-era generative AI: ethical risk, RLHF context, and practical levers — traceability, human-in-the-loop, explainability, and governance for high-impact use cases.
Published 14 December 2022 on Deeploy, the piece places ChatGPT in the broader shift toward generative AI, covering RLHF, industry examples, and why broad ethical-risk programmes (Reid Blackman's Ethical Machines is cited) matter when the future of the tooling is uncertain.
It closes with concrete levers for control: traceability, human-in-the-loop, explainability and feedback, and cautious use where impact is high; these are the same problems runtime AI governance exists to solve.
Full article: deeploy.ml.