Can AI Be Trusted?
A Heated KPH Chat on ChatGPT Safety

"Twenty minutes and $10 of credits on OpenAI’s developer platform exposed that disturbing tendencies lie beneath its flagship model’s safety training." (Wall Street Journal)
This WSJ article shared in the KPH Mafia group lit up a discussion.
The core question: Can we trust AI models like ChatGPT?
Key points from the group:
“Are we building tools or traps?”
One dev shared how an LLM gave dangerous advice under misleading prompts.
Another asked: “Should startups rely on AI APIs they don’t control?”
Founder Takeaways:
- Use LLMs, but validate every output
- For critical workflows, add human-in-the-loop systems
- Stay updated on safety and alignment research
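The first two takeaways can be sketched in code. This is a minimal, illustrative gate for model output, not any real library's API: every name here (`validate_output`, `route`, the `banned_terms` check) is a hypothetical example, and a production system would use far more robust checks than keyword matching.

```python
# Minimal sketch of "validate every output" plus a human-in-the-loop
# gate for critical workflows. All names are illustrative assumptions.

def validate_output(text: str, banned_terms: list[str]) -> bool:
    """Automated first pass: reject output containing any banned term."""
    lowered = text.lower()
    return not any(term in lowered for term in banned_terms)

def route(text: str, banned_terms: list[str], critical: bool) -> str:
    """Decide what happens to a model response before it reaches users."""
    if not validate_output(text, banned_terms):
        return "rejected"          # failed automated checks
    if critical:
        return "human_review"      # critical workflow: a person signs off
    return "auto_approved"         # low-stakes: ship it automatically

# Example usage
print(route("Here is a harmless summary.", ["exploit"], critical=False))
print(route("How to exploit this flaw...", ["exploit"], critical=False))
print(route("Wire the full payment now.", ["exploit"], critical=True))
```

The key design choice is that "critical" requests never skip the human step, even when automated checks pass; the automated filter only screens out the obvious failures first.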
This wasn’t just a technical debate; it was a wake-up call. If founders and builders don’t ask these questions, who will?

