Dangerous capability evaluations are a crucial tool for AI governance. But without accurate threat models, they could give us a false sense of security.
I am uncertain how we can effectively measure and mitigate the risks of AI. AI capabilities are expanding far faster than social and political restraints can keep up.
I don’t see any alternative here either. If one company or country restricts AI, another will take its place.